
Basics of SQL (Structured Query Language)


SQL (Structured Query Language) is a domain-specific language used in programming and designed for querying and managing data held in a relational database. As with any language, it can be useful to have a list of common queries and function names as a reference.

 

Basic keywords

Before we delve into some basic common queries, let’s take a look at some of the keywords that you’ll come across:

Keyword – Explanation
SELECT – Used to state which columns to query. Use * for all columns
FROM – Declares which table/view etc. to select from
WHERE – Introduces a condition
= – Used for comparing a value to a specified input
LIKE – Special operator used with the WHERE clause to search for a specific pattern in a column
GROUP BY – Arranges identical data into groups
HAVING – Specifies that only rows where aggregate values meet the specified conditions should be returned. Used because the WHERE keyword cannot be used with aggregate functions
INNER JOIN – Returns the rows that have matching key values in both tables
LEFT JOIN – Returns all rows from the ‘left’ (1st) table, with the matching rows from the ‘right’ (2nd) table where they exist
RIGHT JOIN – Returns all rows from the ‘right’ (2nd) table, with the matching rows from the ‘left’ (1st) table where they exist
FULL OUTER JOIN – Returns all rows from both tables, combining them where matches exist
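
To see several of these keywords working together, here is a minimal sketch that assumes a hypothetical customers table with name and country columns:

-- Count customers per country whose name starts with 'A',
-- keeping only countries with more than 10 such customers
SELECT country, COUNT(*) AS customer_count
FROM customers
WHERE name LIKE 'A%'
GROUP BY country
HAVING COUNT(*) > 10;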

 

 

Reporting aggregate functions

 

In database management, an aggregate function is a function in which the values of multiple rows are grouped together to form a single summary value. They are useful for reporting, and some examples of common aggregate functions can be found below:

Function – Explanation
COUNT – Returns the number of rows in a certain table/view
SUM – Returns the sum of the values
AVG – Returns the average for a group of values
MIN – Returns the smallest value of the group
MAX – Returns the largest value of the group
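
As a short illustration, the query below (assuming a hypothetical orders table with customer_id and amount columns) reports one row per customer with each of these aggregates:

-- One summary row per customer
SELECT customer_id,
       COUNT(*)    AS order_count,
       SUM(amount) AS total_amount,
       AVG(amount) AS avg_amount,
       MIN(amount) AS min_amount,
       MAX(amount) AS max_amount
FROM orders
GROUP BY customer_id;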

 

Querying data from a table

 

A database table is a set of data elements (values) stored in a model of vertical columns and horizontal rows. Use any of the below to query a table in SQL:

SQL – Explanation
SELECT c1 FROM t – Select data in column c1 from a table named t
SELECT * FROM t – Select all rows and columns from a table named t
SELECT c1 FROM t WHERE c1 = 'test' – Select data in column c1 from a table named t where the value in c1 = 'test'
SELECT c1 FROM t ORDER BY c1 ASC (DESC) – Select data in column c1 from a table named t and order by c1, either in ascending (the default) or descending order
SELECT c1 FROM t ORDER BY c1 LIMIT n OFFSET offset – Select data in column c1 from a table named t, skip offset rows and return the next n rows
SELECT c1, aggregate(c2) FROM t GROUP BY c1 – Select data in columns c1 and c2 from a table named t and group rows using an aggregate function
SELECT c1, aggregate(c2) FROM t GROUP BY c1 HAVING condition – Select data in columns c1 and c2 from a table named t, group rows using an aggregate function and filter these groups using the HAVING clause
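
Putting a few of these together, the sketch below returns the second page of ten rows from a hypothetical employees table ordered by surname; the LIMIT/OFFSET syntax is the MySQL/PostgreSQL form, and other databases use equivalents such as TOP or FETCH FIRST:

-- Page 2 of employees, 10 rows per page, ordered by surname
SELECT last_name, first_name
FROM employees
ORDER BY last_name ASC
LIMIT 10 OFFSET 10;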

 

Querying data from multiple tables

 

As well as querying from a single table, SQL gives you the ability to query data from multiple tables:

SQL – Explanation
SELECT c1, c2 FROM t1 INNER JOIN t2 ON condition – Select columns c1 and c2 from a table named t1 and perform an inner join between t1 and t2
SELECT c1, c2 FROM t1 LEFT JOIN t2 ON condition – Select columns c1 and c2 from a table named t1 and perform a left join between t1 and t2
SELECT c1, c2 FROM t1 RIGHT JOIN t2 ON condition – Select columns c1 and c2 from a table named t1 and perform a right join between t1 and t2
SELECT c1, c2 FROM t1 FULL OUTER JOIN t2 ON condition – Select columns c1 and c2 from a table named t1 and perform a full outer join between t1 and t2
SELECT c1, c2 FROM t1 CROSS JOIN t2 – Select columns c1 and c2 from a table named t1 and produce a Cartesian product of the rows in the two tables
SELECT c1, c2 FROM t1, t2 – Same as above: select columns c1 and c2 from a table named t1 and produce a Cartesian product of the rows in the two tables
SELECT c1, c2 FROM t1 A INNER JOIN t1 B ON condition – Select columns c1 and c2 from a table named t1 and join it to itself (a self join) using an INNER JOIN clause and the aliases A and B
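
As a concrete sketch, an inner join between hypothetical customers and orders tables might look like this, returning only customers that have placed at least one order:

-- Customers paired with the orders they have placed
SELECT customers.name, orders.order_date, orders.amount
FROM customers
INNER JOIN orders ON orders.customer_id = customers.id;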

 

Using SQL Operators

 

SQL operators are reserved words or characters used primarily in the WHERE clause of an SQL statement to perform operations:

SQL – Explanation
SELECT c1 FROM t1 UNION [ALL] SELECT c1 FROM t2 – Select column c1 from a table named t1 and column c1 from a table named t2 and combine the rows from the two queries (UNION removes duplicate rows unless ALL is specified)
SELECT c1 FROM t1 INTERSECT SELECT c1 FROM t2 – Select column c1 from a table named t1 and column c1 from a table named t2 and return the intersection of the two queries
SELECT c1 FROM t1 MINUS SELECT c1 FROM t2 – Select column c1 from a table named t1 and column c1 from a table named t2 and subtract the 2nd result set from the 1st (MINUS is called EXCEPT in some databases)
SELECT c1 FROM t WHERE c1 [NOT] LIKE pattern – Select column c1 from a table named t and query the rows using pattern matching with the % and _ wildcards
SELECT c1 FROM t WHERE c1 [NOT] IN test_list – Select column c1 from a table named t and return the rows that are (or are not) in test_list
SELECT c1 FROM t WHERE c1 BETWEEN min AND max – Select column c1 from a table named t and return the rows where c1 is between min and max
SELECT c1 FROM t WHERE c1 IS [NOT] NULL – Select column c1 from a table named t and check whether the values are NULL or not
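
Several of these operators can be combined in a single WHERE clause; the sketch below assumes a hypothetical products table with name, category and price columns:

-- Filter products by name pattern, category membership and price range
SELECT name, category, price
FROM products
WHERE name LIKE '%phone%'
  AND category IN ('electronics', 'accessories')
  AND price BETWEEN 100 AND 500;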

 

Data modification

 

Data modification is a key part of SQL, giving you the ability not only to add and delete data, but also to modify existing records:

SQL – Explanation
INSERT INTO t(column_list) VALUES (value_list) – Insert one row into a table named t
INSERT INTO t(column_list) VALUES (value_list), (value_list), … – Insert multiple rows into a table named t
INSERT INTO t1(column_list) SELECT column_list FROM t2 – Insert rows from t2 into a table named t1
UPDATE t SET c1 = new_value – Update the value in column c1 for all rows in table t
UPDATE t SET c1 = new_value, c2 = new_value WHERE condition – Update values in columns c1 and c2 in table t for the rows that match the condition
DELETE FROM t – Delete all rows from a table named t
DELETE FROM t WHERE condition – Delete all rows from a table named t that match a certain condition
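
A worked sequence against a hypothetical employees table might look like this:

-- Insert a row, adjust it, then remove it again
INSERT INTO employees (first_name, last_name, salary)
VALUES ('Jane', 'Doe', 50000);

UPDATE employees
SET salary = 55000
WHERE last_name = 'Doe';

DELETE FROM employees
WHERE last_name = 'Doe';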

 

Views

A view is a virtual table that is the result of a query. Views can be extremely useful and are often used as a security mechanism, letting users access data through the view rather than giving them direct access to the underlying base table:

SQL – Explanation
CREATE VIEW view1 AS SELECT c1, c2 FROM t1 WHERE condition – Create a view consisting of columns c1 and c2 from a table named t1 where a certain condition has been met
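
For example, the sketch below creates a hypothetical view of high-value orders and then queries it like an ordinary table:

-- A view over large orders, then a query against it
CREATE VIEW large_orders AS
SELECT id, customer_id, amount
FROM orders
WHERE amount > 1000;

SELECT * FROM large_orders;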

 

Indexes

An index is used to speed up the performance of queries by reducing the number of database pages that have to be visited:

SQL – Explanation
CREATE INDEX index_name ON t(c1, c2) – Create an index on columns c1 and c2 of the table t
CREATE UNIQUE INDEX index_name ON t(c3, c4) – Create a unique index on columns c3 and c4 of the table t
DROP INDEX index_name – Drop an index
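
As a sketch, the index below (again assuming a hypothetical employees table) would typically speed up queries that filter on last_name:

-- Index to speed up lookups by surname
CREATE INDEX idx_employees_last_name ON employees(last_name);

SELECT first_name, last_name
FROM employees
WHERE last_name = 'Doe';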

 

Stored procedure

 

A stored procedure is a set of SQL statements with an assigned name that can then be easily reused and shared by multiple programs:

SQL – Explanation
CREATE PROCEDURE procedure_name
    @variable AS datatype = value
AS
    -- Comments
    SELECT * FROM t
GO
– Create a procedure called procedure_name, create a local variable and then select from table t (SQL Server / Transact-SQL syntax)
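
A slightly more concrete sketch, assuming SQL Server (Transact-SQL) syntax and a hypothetical employees table, followed by a call to the procedure:

-- Parameterised stored procedure and an example call
CREATE PROCEDURE GetEmployeesByDepartment
    @DepartmentId INT = 1
AS
    -- Return all employees in the requested department
    SELECT first_name, last_name
    FROM employees
    WHERE department_id = @DepartmentId;
GO

EXEC GetEmployeesByDepartment @DepartmentId = 3;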

 

Triggers

 

A trigger is a special type of stored procedure that automatically executes when a user tries to modify data through a DML (data manipulation language) event. A DML event is an INSERT, UPDATE or DELETE statement on a table or view:

SQL – Explanation
CREATE OR MODIFY TRIGGER trigger_name
WHEN EVENT
ON table_name TRIGGER_TYPE
EXECUTE stored_procedure
– Create or modify a trigger (the exact syntax varies between database systems), where:

WHEN:

  • BEFORE – invoke before the event occurs
  • AFTER – invoke after the event occurs

EVENT:

  • INSERT – invoke for insert
  • UPDATE – invoke for update
  • DELETE – invoke for delete

TRIGGER_TYPE:

  • FOR EACH ROW
  • FOR EACH STATEMENT

DROP TRIGGER trigger_name – Delete a specific trigger
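
As a concrete sketch, assuming MySQL syntax and hypothetical orders and order_audit tables, an AFTER INSERT trigger could log every new order:

-- Audit trigger: record each newly inserted order
CREATE TRIGGER trg_orders_after_insert
AFTER INSERT ON orders
FOR EACH ROW
INSERT INTO order_audit (order_id, logged_at)
VALUES (NEW.id, NOW());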

 

 

How the blockchain could break Big Tech’s hold on AI


Pairing artificial intelligence and the blockchain might be what you would expect from a scammer looking to make a quick buck in 2018.

The two concepts, after all, are two of the most buzzed about and least understood ideas in the tech universe.

And the blockchain, the database design introduced by bitcoin, has lately been the most popular route for anyone looking to raise money for an idea that sounds too good to be true.

Despite how easy the combination is to mock, the idea of applying the blockchain to AI is attracting a growing roster of serious entrepreneurs and venture capitalists, many of them with impressive academic credentials.

Many AI experts are concerned that Facebook, Google and a few other big companies are hoarding talent. They also control huge troves of online data that are necessary to train and refine the best machine learning programs.

“It’s important to have machine learning capabilities that are more under the user’s control, rather than relying on these big companies to get access to these capabilities.”

The start-ups working toward this goal are applying blockchains in several ways. At the most basic level, just as the blockchain allows money to be moved around without any bank or central authority in the middle, AI experts are hoping a blockchain can allow artificial intelligence networks to access large stores of data without any big company controlling the data or algorithms.

Several start-ups are setting up blockchain marketplaces, where people can buy and sell data.

Ocean Protocol, a project based in Berlin, is building the infrastructure so that anyone can set up a marketplace for any kind of data, with the users of data paying the sources with digital tokens.

Unlike Google and Facebook, which store the data they get from users, the marketplaces built on Ocean Protocol will not have the data themselves; they will just be places for people with data to meet, ensuring that no central player can access or exploit the data.

“Blockchains are incentive machines — you can get people to do stuff by paying them,” said Trent McConaghy, one of the founders of Ocean Protocol, who has been working in AI since the 1990s.

The goal, McConaghy said, is to “decentralize access to data before it’s too late.”

Another start-up, Revel, will pay people to collect the data that companies are looking for, like pictures of taxis or recordings of a particular language. Users can also let their phones and computers be used to process and categorize the images and sounds — all in exchange for digital tokens. Over a thousand people already have put their computers to work.

These sorts of marketplaces are only the outer layer of the blockchain systems that are being built to handle AI data.

 

One of the biggest concerns that people have about the data being collected by Google and Facebook is the access it gives these companies to the most private details of our lives.

One project building on top of Oasis, known as Kara, will allow medical researchers looking at the behavior of specific diseases to train their machine learning models with data from actual patients, without the data ever being exposed.

Other start-ups are using blockchains to open access to the AI models themselves. Ben Goertzel has created SingularityNET, a blockchain that will serve as a link among AI services around the world. If one AI module is unable to come up with an answer, it can consult with others, and provide compensation if one of the other modules is able to get it right.

Hanson Robotics is planning to use SingularityNET to feed information into its humanoid robot, Sophia. Unlike Amazon’s Alexa service, which answers questions using services approved by Amazon, Goertzel wants Sophia to reach out to other AI providers if she can’t find the right answer.

 

Introduction to Software & Concepts of Programming


This file covers the following topics:

1. SDLC: Software Development Life Cycle

2. PDLC: Program Development Life Cycle

a.) Defining the Problem
b.) Designing the Program
c.) Coding the Program
d.) Testing and Debugging the Program
e.) Formalizing the Solution
f.) Maintaining the Program

3. Program Design Tools

a.) Flow Charts
b.) Decision Tables
c.) Pseudocode (Algorithm)

4. Control Structures

a.) Branching Structures

i.) If
ii.) Nested If
iii.) If Else

b.) Looping Structures

i.) While
ii.) Do While
iii.) For

5. Computer Languages

a.) Low Level Languages
b.) High Level Languages

6. Generations of Programming Languages

a.) First Generation Languages
b.) Second Generation Languages
c.) Fourth Generation Languages
d.) Fifth Generation Languages

7. Language Translators

a.) Compiler
b.) Interpreter

8. Programming Languages

a.) Common Business oriented Language (COBOL)
b.) BASIC
c.) Pascal
d.) C
e.) Ada
f.)  C++
g.) Visual Basic
h.) JAVA

9. Concept on Object Oriented Programming

10. Features of OOPs

a.) Class
b.) Object
c.) Abstraction
d.) Encapsulation
e.) Polymorphism
f.) Inheritance

Click to Download – Introduction to Software & Concept Programming (PDF File)

Everything you need to know about Artificial Intelligence


An executive guide to artificial intelligence, from machine learning and general AI to neural networks.

 

What is artificial intelligence (AI)?

 

Back in the 1950s, the fathers of the field, Marvin Minsky and John McCarthy, described artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task.

That obviously is a broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.

AI systems will typically demonstrate at least some of the following behaviors associated with human intelligence: planning, learning, reasoning, problem-solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

 

What are the uses for AI?

 

AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon’s Alexa and Apple’s Siri, to recognize who and what is in a photo, to spot spam, or detect credit card fraud.

 

What are the different types of AI?

 

At a very high-level artificial intelligence can be split into two broad types: Narrow AI and General AI.

Narrow AI is what we see all around us in computers today: intelligent systems that have been taught or learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do specific tasks, which is why they are called narrow AI.

 

What can Narrow AI do?

 

There are a vast number of emerging applications for narrow AI: interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars, responding to simple customer-service queries, co-ordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays, flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices, the list goes on and on.

 

What can General AI do?

 

Artificial general intelligence is very different and is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, or to reason about a wide variety of topics based on its accumulated experience. This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which doesn’t exist today, and AI experts are fiercely divided over how soon it will become a reality.

A survey conducted among four groups of experts in 2012-2013 by AI researcher Vincent C. Müller and philosopher Nick Bostrom reported a 50% chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90% by 2075. The group went even further, predicting that so-called ‘superintelligence’ — which Bostrom defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” — was expected some 30 years after the achievement of AGI.

That said, some AI experts believe such projections are wildly optimistic given our limited understanding of the human brain and believe that AGI is still centuries away.

 

What is Machine Learning?

 

There is a broad body of research in AI, the many strands of which feed into and complement each other.

Currently enjoying something of a resurgence, machine learning is where a computer system is fed large amounts of data, which it then uses to learn how to carry out a specific task, such as understanding speech or captioning a photograph.

 

What are Neural Networks?

 

Key to the process of machine learning are neural networks. These are brain-inspired networks of interconnected layers of simple processing units, called neurons, that feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers. During training of these neural networks, the weights attached to different inputs will continue to be varied until the output from the neural network is very close to what is desired, at which point the network will have ‘learned’ how to carry out a particular task.

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.

There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are a type of neural net particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition. The design of neural networks is also evolving, with researchers recently refining a more effective form of deep neural network called long short-term memory or LSTM, allowing it to operate fast enough to be used in on-demand systems like Google Translate.

Another area of AI research is evolutionary computation, which borrows from Darwin’s famous theory of natural selection and sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.

This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution and could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was recently showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.

Finally, there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing that machine to mimic the behavior of a human expert in a specific domain. An example of these knowledge-based systems might be, for example, an autopilot system flying a plane.

 

What is fueling the resurgence in AI?

 

The biggest breakthroughs for AI research in recent years have been in the field of machine learning, in particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power in recent years, during which time the use of GPU clusters to train machine-learning systems has become more prevalent.

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google and Microsoft, have moved to use specialized chips tailored to both running, and more recently training, machine-learning models.

An example of one of these custom chips is Google’s Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which useful machine-learning models built using Google’s TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train up models for DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photo, as well as services that allow the public to build machine learning models using Google’s TensorFlow Research Cloud. The second generation of these chips was unveiled at Google’s I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of the top-end graphics processing units (GPUs).

 

What are the elements of machine learning?

 

As mentioned, machine learning is a subset of AI and is generally split into two main categories: supervised and unsupervised learning.

 

Supervised learning

 

A common technique for teaching AI systems is by training them using a very large number of labeled examples. These machine-learning systems are fed huge amounts of data, which has been annotated to highlight the features of interest. These might be photos labeled to indicate whether they contain a dog, or sentences annotated to indicate whether the word ‘bass’ relates to music or a fish. Once trained, the system can then apply these labels to new data, for example to a dog in a photo that’s just been uploaded.

This process of teaching a machine by example is called supervised learning and the role of labeling these examples is commonly carried out by online workers, employed through platforms like Amazon Mechanical Turk.

Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively — although this is increasingly possible in an age of big data and widespread data mining. Training datasets are huge and growing in size — Google’s Open Images Dataset has about nine million images, while its labeled video repository YouTube-8M links to seven million labeled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50,000 people — most of whom were recruited through Amazon Mechanical Turk — who checked, sorted, and labeled almost one billion candidate pictures.

In the long run, having access to huge labeled datasets may also prove less important than access to large amounts of compute power.

In recent years, Generative Adversarial Networks (GANs) have shown how machine-learning systems that are fed a small amount of labeled data can then generate huge amounts of fresh data to teach themselves.

This approach could lead to the rise of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labeled data than is necessary for training systems using supervised learning today.

 

Unsupervised learning

 

In contrast, unsupervised learning uses a different approach, where algorithms try to identify patterns in data, looking for similarities that can be used to categorize that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn’t set up in advance to pick out specific types of data; it simply looks for data that can be grouped by its similarities, for example, the way Google News groups together stories on similar topics each day.

 

Reinforcement learning

 

A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick.

In reinforcement learning, the system attempts to maximize a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind’s Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and determines various information, such as the distance between objects on the screen.

By also looking at the score achieved in each game the system builds a model of which action will maximize the score in different circumstances, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.

 

Which are the leading firms in AI?

 

With AI playing an increasingly major role in modern software and services, each of the major tech firms is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google, with its DeepMind AlphaGo AI, that has made the biggest impact on public awareness of AI.

 

Which AI services are available?

 

All of the major cloud platforms — Amazon Web Services, Microsoft Azure and Google Cloud Platform — provide access to GPU arrays for training and running machine learning models, with Google also gearing up to let users use its Tensor Processing Units — custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualization tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires the user to have no machine-learning expertise.

Cloud-based, machine-learning services are constantly evolving, and at the start of 2018, Amazon revealed a host of new AWS offerings designed to streamline the process of training up machine-learning models.

For those firms that don’t want to build their own machine learning models but instead want to consume AI-powered, on-demand services — such as voice, vision, and language recognition — Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from health care to retail, grouping these offerings together under its IBM Watson umbrella — and recently investing $2bn in buying The Weather Channel to unlock a trove of data to augment its AI services.

 

Which of the major tech firms is winning the AI race?

 

Internally, each of the tech giants — and others such as Facebook — use AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam — the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple’s Siri, Amazon’s Alexa, the Google Assistant, and Microsoft Cortana.

Relying heavily on voice recognition and natural-language processing, as well as needing an immense corpus to draw upon to answer queries, a huge amount of tech goes into developing these assistants.

But while Apple’s Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space — Google Assistant with its ability to answer a wide range of queries and Amazon’s Alexa with the massive number of ‘Skills’ that third-party devs have created to add to its capabilities.

 

Which countries are leading the way in AI?

 

It’d be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI in fields ranging from e-commerce to autonomous driving. As a country, China is pursuing a three-step plan to turn AI into a core industry for the country, one that will be worth 150 billion yuan ($22bn) by 2020.

Baidu has invested in developing self-driving cars, powered by its deep learning algorithm, Baidu AutoBrain, and, following several years of tests, plans to roll out fully autonomous vehicles in 2018 and mass-produce them by 2021.

Baidu has also partnered with Nvidia to use AI to create a cloud-to-car autonomous car platform for auto manufacturers around the world.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent, means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to one in China’s favor.

 

How can I get started with AI?

 

While you could try to build your own GPU array at home and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own machine-learning models through to web services that allow you to access AI-powered tools such as speech, language, vision and sentiment recognition on demand.

 

How will AI change the world?

 

Robots and driverless cars

The desire for robots to be able to act autonomously and understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, use of AI is helping robots move into new areas such as self-driving cars, delivery robots, as well as helping robots to learn new skills. General Motors recently said it would build a driverless car without a steering wheel or pedals by 2019, while Ford committed to doing so by 2021, and Waymo, the self-driving group inside Google parent Alphabet, will soon offer a driverless taxi service in Phoenix.

 

Fake news

 

We are on the verge of having neural networks that can create photo-realistic images or replicate someone’s voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how such technologies will be used to misappropriate people’s image, with tools already being created to convincingly splice famous actresses into adult films.

Speech and language recognition

Machine-learning systems have helped computers recognize what people are saying with an accuracy of almost 95 percent. Recently Microsoft’s Artificial Intelligence and Research group reported it had developed a system able to transcribe spoken English as accurately as human transcribers.

With researchers pursuing a goal of 99 percent accuracy, expect speaking to computers to become the norm alongside more traditional forms of human-machine interaction.

Facial recognition and surveillance

In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech giant Baidu says it can match faces with 99 percent accuracy, providing the face is clear enough on the video. While police forces in western countries have generally only trialled using facial-recognition systems at large events, in China the authorities are mounting a nationwide program to connect CCTV across the country to facial recognition and to use AI systems to track suspects and suspicious behavior, and are also trialling the use of facial-recognition glasses by police.

Although privacy regulations vary across the world, it’s likely this more intrusive use of AI technology — including AI that can recognize emotions — will gradually become more widespread elsewhere.

Healthcare

AI could eventually have a dramatic impact on healthcare, helping radiologists to pick out tumors in x-rays, aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs.

There have been trials of AI-related technology in hospitals across the world. These include IBM’s Watson clinical decision support tool, which is trained by oncologists at Memorial Sloan Kettering Cancer Center, and the use of Google DeepMind systems by the UK’s National Health Service, where it will help spot eye abnormalities and streamline the process of screening patients for head and neck cancers.

Will AI kill us all?

Again, it depends who you ask. As AI-powered systems have grown more capable, so warnings of the downsides have become more dire.

Tesla and SpaceX CEO Elon Musk has claimed that AI is a “fundamental risk to the existence of human civilization”. As part of his push for stronger regulatory oversight and more responsible research into mitigating the downsides of AI he set up OpenAI, a non-profit artificial intelligence research company that aims to promote and develop friendly AI that will benefit society as a whole. Similarly, the esteemed physicist Stephen Hawking has warned that once a sufficiently advanced AI is created it will rapidly advance to the point at which it vastly outstrips human capabilities, a phenomenon known as the singularity, and could pose an existential threat to the human race.

Yet the notion that humanity is on the verge of an AI explosion that will dwarf our intellect seems ludicrous to some AI researchers.

Chris Bishop, Microsoft’s director of research in Cambridge, England, stresses how different the narrow intelligence of AI today is from the general intelligence of humans, saying that when people worry about “Terminator and the rise of the machines and so on? Utter nonsense, yes. At best, such discussions are decades away.”

Will an AI steal your job?

  • The possibility of artificially intelligent systems replacing much of modern manual labor is perhaps a more credible near-future possibility.
  • While AI won’t replace all jobs, what seems to be certain is that AI will change the nature of work, with the only question being how rapidly and how profoundly automation will alter the workplace.
  • There is barely a field of human endeavor that AI doesn’t have the potential to impact. As AI expert Andrew Ng puts it: “many people are doing routine, repetitive jobs. Unfortunately, technology is especially good at automating routine, repetitive work”, saying he sees a “significant risk of technological unemployment over the next few decades”.
  • The evidence of which jobs will be supplanted is starting to emerge. Amazon has just launched Amazon Go, a cashier-free supermarket in Seattle where customers just take items from the shelves and walk out. What this means for the more than three million people in the US who work as cashiers remains to be seen. Amazon again is leading the way in using robots to improve efficiency inside its warehouses. These robots carry shelves of products to human pickers who select items to be sent out. Amazon has more than 100,000 bots in its fulfillment centers, with plans to add many more. But Amazon also stresses that as the number of bots has grown, so has the number of human workers in these warehouses. However, Amazon and small robotics firms are working to automate the remaining manual jobs in the warehouse, so it’s not a given that manual and robotic labor will continue to grow hand-in-hand.
  • Fully autonomous self-driving vehicles aren’t a reality yet, but by some predictions the self-driving trucking industry alone is poised to take over 1.7 million jobs in the next decade, even without considering the impact on couriers and taxi drivers.
  • Yet some of the easiest jobs to automate won’t even require robotics. At present there are millions of people working in administration, entering and copying data between systems, chasing and booking appointments for companies. As software gets better at automatically updating systems and flagging the information that’s important, so the need for administrators will fall.
  • As with every technological shift, new jobs will be created to replace those lost. However, what’s uncertain is whether these new roles will be created rapidly enough to offer employment to those displaced, and whether the newly unemployed will have the necessary skills or temperament to fill these emerging roles.
  • Not everyone is a pessimist. For some, AI is a technology that will augment, rather than replace, workers. Not only that but they argue there will be a commercial imperative to not replace people outright, as an AI-assisted worker — think a human concierge with an AR headset that tells them exactly what a client wants before they ask for it — will be more productive or effective than an AI working on its own.
  • Among AI experts there’s a broad range of opinion about how quickly artificially intelligent systems will surpass human capabilities.
  • Oxford University’s Future of Humanity Institute asked several hundred machine-learning experts to predict AI capabilities over the coming decades.
  • Notable dates included AI writing essays that could pass for being written by a human by 2026, truck drivers being made redundant by 2027, AI surpassing human capabilities in retail by 2031, writing a best-seller by 2049, and doing a surgeon’s work by 2053.
  • They estimated there was a relatively high chance that AI beats humans at all tasks within 45 years and automates all human jobs within 120 years.

 

VBA Data Types



When you declare a variable, you should also identify its data type. You’re probably already very familiar with data types because you assign data types to table fields. VBA uses the same data types to define a variable.

To name a variable in VBA, you need to follow these rules:

  • It must be no more than 255 characters
  • Spaces are not allowed
  • It must not begin with a number
  • Periods are not permitted

 

The most important job of a data type is to ensure the validity of your data. Specifying a data type won’t keep you from entering an invalid value, but it will keep you from entering an invalid type. If you omit the data type, VBA applies the Variant data type to your variable—it’s the most flexible type, and VBA won’t guess at what the data type should be. The table below compares VBA’s many data types.

 

VBA Data Type Comparison

 

Data Type or Subtype | Required Memory | Default Value | VBA Constant | Range
Integer | 2 bytes | 0 | vbInteger | –32,768 to 32,767
Long Integer | 4 bytes | 0 | vbLong | –2,147,483,648 to 2,147,483,647
Single | 4 bytes | 0 | vbSingle | –3.402823E38 to –1.401298E–45 or 1.401298E–45 to 3.402823E38
Double | 8 bytes | 0 | vbDouble | –1.79769313486232E308 to –4.94065645841247E–324 or 4.94065645841247E–324 to 1.79769313486232E308
Currency | 8 bytes | 0 | vbCurrency | –922,337,203,685,477.5808 to 922,337,203,685,477.5807
Date | 8 bytes | 00:00:00 | vbDate | January 1, 100 to December 31, 9999
Fixed String | String’s length | Number of spaces to accommodate string | vbString | 1 to 65,400 characters
Variable String | 10 bytes plus the number of characters | Zero-length string (“”) | vbString | 0 to 2 billion characters
Object | 4 bytes | Nothing (vbNothing) | vbObject | Any Access object, ActiveX component or Class object
Boolean | 2 bytes | False | vbBoolean | –1 or 0
Variant | 16 bytes | Empty (vbEmpty) | vbVariant | Same as Double
Decimal | 14 bytes | 0 | vbDecimal | –79,228,162,514,264,337,593,543,950,335 to 79,228,162,514,264,337,593,543,950,335 or –7.9228162514264337593543950335 to 7.9228162514264337593543950335
Byte | 1 byte | 0 | vbByte | 0 to 255

 

 

Excel VBA Data-Types

 

A computer cannot, by itself, differentiate between numbers (1, 2, 3, …) and strings (a, b, c, …). To make this distinction, we use data types.

 

VBA data types can be segregated into two categories:

 

  • Numeric Data Types

Type | Storage | Range of Values
Byte | 1 byte | 0 to 255
Integer | 2 bytes | -32,768 to 32,767
Long | 4 bytes | -2,147,483,648 to 2,147,483,647
Single | 4 bytes | -3.402823E+38 to -1.401298E-45 for negative values; 1.401298E-45 to 3.402823E+38 for positive values
Double | 8 bytes | -1.79769313486232E+308 to -4.94065645841247E-324 for negative values; 4.94065645841247E-324 to 1.79769313486232E+308 for positive values
Currency | 8 bytes | -922,337,203,685,477.5808 to 922,337,203,685,477.5807
Decimal | 14 bytes | +/- 79,228,162,514,264,337,593,543,950,335 if no decimal is used; +/- 7.9228162514264337593543950335 (28 decimal places)

  • Non-numeric Data Types

Data Type | Bytes Used | Range of Values
String (fixed length) | Length of string | 1 to 65,400 characters
String (variable length) | Length + 10 bytes | 0 to 2 billion characters
Boolean | 2 bytes | True or False
Date | 8 bytes | January 1, 100 to December 31, 9999
Object | 4 bytes | Any embedded object
Variant (numeric) | 16 bytes | Any value as large as Double
Variant (text) | Length + 22 bytes | Same as variable-length string

In VBA, if the data type is not specified, it will automatically declare the variable as a Variant.

 

Approaching Business via Blockchain


For almost 25 years, the concept of a cryptocurrency and electronic cash has been studied and improvements are continually being made. The big breakthrough came when Bitcoin was released in January 2009. Bitcoin is a form of electronic cash without a central bank or single administrator. Why did it take so long to create a cryptocurrency that worked? Bitcoin and other cryptocurrencies finally succeeded because the founder of Bitcoin, Satoshi Nakamoto (a still unknown entity), also created the network that cryptocurrencies exist on, called the Blockchain.

 

Electronic currencies, until the existence of Bitcoin, had an intermediary or government oversight. Payment companies such as Visa or PayPal work because there is a third party tracking payments, available funds and other business details. Blockchain changed a currency’s need for third-party verification. The Blockchain created a peer-to-peer network without the need for a central bank or administrator.

 

To understand the potential of Blockchain, you must first have a rudimentary idea of how it works and why it is so powerful. Blockchain is a shared, unchangeable, digital ledger that facilitates the process of recording transactions and tracking assets. The assets can be tangible or intangible, and virtually anything of value can be tracked. At its core, a Blockchain is a permanent and unalterable record of every transaction.

 

For owners of Bitcoin and other cryptocurrencies, Blockchain permitted all anonymous owners to view the entire ledger and verify each other’s transactions and holdings. Changes in the Blockchain cannot be tampered with and must be verified through cryptographic proof. I imagine many people reading this article and pausing at the word “cryptographic.”

 

Cryptography is the writing or solving of codes. It allows for storing and transmitting data in a form such that only those for whom it is intended can read and process it. This means that cryptography is part of what allows the Blockchain to be secure, private and tamper-free.

 

Blockchain technology helped with the emergence of Bitcoin but its potential for business changes is explosive. For instance, in my industry, verification of assets and security is of utmost importance. An investor does not want to pay $5,000 for shares of Apple if the seller doesn’t own them.

 

Blockchain will also allow for a leap in security, risk reduction, trust and transparency. It will decentralize trading and allow for settlement of trades in mere minutes by allowing for instant verification. This helps with the cost of trading and the speed of money through the financial system.

 

Another industry that will take leaps forward in time and costs is supply chain management. Blockchain will permit verification of shipments, enable better tracking and cut down on the paperwork needed to track shipments.

 

Auditing also becomes simpler when using the Blockchain in business. As soon as a transaction is complete, the block is added to the chain and it has been audited by all members of the Blockchain, so there is no need to go back to verify or audit the transaction.

 

Another industry that can take advantage of Blockchain technology is car dealers. Car company leasing is cited in the IBM publication “Blockchain for Dummies,” which I highly recommend (the publication is free to download from IBM’s site). As cited in the IBM publication, a Blockchain ledger can be used by authorized participants to “access, monitor, and analyse the state of a vehicle.” A closed network such as this would allow for cost and time savings.

 

There are many amazing opportunities being created using Blockchain. Estimates put Blockchain at the point the Internet was in 1990 and it will be interesting to see how it progresses and if it makes as much of an impact as the internet.

 

 

Useful Websites


 

  1. INDIA’S #1 SITE FOR COMPETITIVE EXAM PREPARATION – This site is made by students of the University of Mumbai, and all the study material needed by students who want to prepare for the various competitive exams like U.P.S.C, M.P.S.C, I.B.P.S, P.S, S.S.C, N.D.A, C.D.S, R.B.I, Railways, etc. is provided free of cost.

 

  2. Finance GYM is a fun workout that helps you Grow Your Money. We use a gamified approach to teach personal finance. You will learn about investing and money management with the help of online tutorials and games. Finance GYM has been covered by leading publications including Business India and Economic Times.

 

 

Top 6 Trends in Data Science


Below are the top six data science trends to follow in 2018:

1. Artificial Intelligence (AI)

Artificial intelligence (AI) is an area of computer science that highlights the formation of intelligent machines that work and react like humans. It is frequently applied to the project of developing systems provided with the intellectual processes characteristic of humans, which includes the ability to reason, discover meaning, or learn from past experience. Artificial Intelligence has become the key focus area for many start-ups and big organizations.

The Artificial Intelligence market is expected to grow at a CAGR (Compound Annual Growth Rate) of 47.50% by 2021. The top contenders in this field are Intel Corporation, Google Inc., IBM Corporation, Amazon Web Services, Apple, Facebook Inc., General Electric and more.

 

2. Machine Learning (ML)

Machine learning is the area of computer science that gives computer systems the ability to learn from data without being explicitly programmed. It is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve. The main goal of machine learning is to allow computers to learn automatically, without human intervention or assistance, and adjust their actions accordingly. It will continue to expand at ever greater rates, being (along with artificial intelligence) the main focus area for many start-ups and businesses.

The machine learning market is expected to grow from USD 1.41 Billion in 2017 to USD 8.81 Billion by 2022, at a Compound Annual Growth Rate (CAGR) of 44.1%. Technological advancement and the proliferation of data generation are some of the major driving factors for the global machine learning market. Examples of companies using machine learning in interesting ways include Facebook with its chatbots, Twitter with curated timelines, Google with neural networks and ‘machines that dream’, and more.

 

3. Big Data

The term Big Data has been around for some time now, but there is still quite a lot of confusion about what Big Data actually means. Big Data is an extremely large volume of data that may be analyzed computationally to reveal patterns, trends and associations, and which inundates a business on a day-to-day basis. Big data is analyzed for insights that lead to better strategic decisions and help organizations make better choices. The big data business is expected to grow from USD 28.65 Billion in 2016 to USD 66.79 Billion by 2021, at a high Compound Annual Growth Rate (CAGR) of 18.45%.

4. Blockchain

A blockchain is, in essence, a continuously growing chain of records (blocks) that are linked and secured using cryptography. It will soon be an important technology for businesses around the world. Blockchain will be in high demand in many industry sectors, such as healthcare and finance.

The global blockchain market size is expected to grow to USD 7,683.7 million by 2022. The primary growth drivers of the market include the rising market for distributed ledger technology, reduced total cost of ownership, the rising market cap of cryptocurrencies and initial coin offerings, the increasing need for streamlined business processes with transparency and immutability, quicker sales and the rising adoption of Blockchain-as-a-Service.

 

5. Edge Computing

Edge computing allows data produced by internet of things (IoT) devices to be processed close to where it is generated instead of being sent across long routes to data warehouses. This helps organizations analyze data quickly, without wasting time. It will be needed in fields such as healthcare, manufacturing, retail and finance.

The edge computing market size is expected to grow from USD 1.47 Billion in 2017 to USD 6.72 Billion by 2022. Increasing pressure on cloud infrastructure, a broad range of applications in various industries, and a rise in the number of intelligent applications are the driving factors for the edge computing market.

 

6. Digital Twin

Digital Twin technology is one of the top 10 strategic technology trends named by Gartner Inc. in 2017. It is a virtual representation of a process, product or service. This pairing of virtual and physical assets allows analysis of data and monitoring of systems to head off problems before they even occur, and also to develop new opportunities for the future by using simulations.

The Digital Twin market is anticipated to expand at a CAGR of 37.87% over the forecast period, to reach USD 15.66 Billion by 2023. The expansion of the industrial internet revolution is one of the chief trends expected to gain momentum in the digital twin market. The Industrial Internet of Things (IIoT) is a blend of big data analytics and IoT that promotes industrial growth.

 

Conclusion

The year 2018 is going to be very interesting and full of fresh discoveries and developments in the data science field. All of these applications are set to become mainstream, contribute to every part of organizations and business areas, and become one of the essential competitive advantages for companies.

 
