
Welcome to our blog.


Artificial intelligence

What is artificial intelligence?

Artificial intelligence (AI) uses algorithms that enable machines to learn, to understand, and to act on what they have learned. AI is quickly becoming a partner to humanity. It influences how we live and work, creating possibilities long thought impossible. Autonomous vehicles, facial recognition, medical diagnostics, human-like robots, digital assistants – we encounter artificial intelligence almost everywhere, and its potential is enormous.

Since its beginnings in the mid-1950s, artificial intelligence has developed to such an extent that experts now believe two-thirds of all future technologies will make use of AI in some way. And the trend is only now coming into full swing: AI has developed more in the past five years than in the 50 years before that. According to CB Insights, investment in artificial intelligence start-ups increased tenfold between 2012 and 2016 alone.

AI will change our everyday lives. But what will those changes look like? What are our opportunities? Where do possible risks lie? How will we live tomorrow? You can find the answers to these questions in this column on artificial intelligence.

Megatrend

According to PwC, AI will increase global GDP by 14 percent by 2030, contributing 15.7 trillion U.S. dollars. For 72 percent of business leaders, this clearly makes AI a ‘competitive business advantage.’ Gero Nießen, Director at the advisory firm Willis Towers Watson, calls artificial intelligence “the largest economic transformation the world has ever seen.”

Milestones

In 1997, the computer Deep Blue won a chess match against world champion Garry Kasparov. It was a huge breakthrough for artificial intelligence in competition with human opponents. Further notable wins would follow, such as at Scrabble and even on the television quiz show Jeopardy! The definitive victory for AI came in what experts consider to be the most complicated board game: Go. The computer AlphaGo defeated one of the world’s best Go players in 2016.

The future of work

Experts consider the fear that millions of jobs will be lost to artificial intelligence to be unfounded. Fewer than five percent of jobs, for example, are defined in a way that makes them able to be completely automated, according to Michael Chui, a partner in McKinsey’s research division. However, the focus of many jobs is shifting. “There are things that machines can’t do as well as humans, such as interacting with other people and doing creative work,” says Chui.

Blockchain technology explained

Chain Reaction – an 8-minute read

It’s a revolution that begins unnoticed. Distributed Ledger Technologies (DLT) such as Blockchain are known to many people as the technology behind the cryptocurrency Bitcoin. But their potential to redefine how we do business and to reshape our business structures remains unclear to many. The “Chain Reaction” series looks behind the hype and answers the most important questions.

What precisely are DLT?

DLT are decentralized, digitally managed ledgers. Through their capacity to distribute information with a high level of transparency and security, DLT significantly extend what the internet can do.

Distributed ledgers are essentially a collaboration model based on an old idea: the cooperative system. DLT transfer this system into the digital world. A platform does not belong to a single company that could exploit monopoly structures, but rather to the users of the system. The areas of application of distributed ledgers therefore go far beyond digital currencies.

The video explains the technology behind Distributed Ledger technologies such as Blockchain and the steps towards an Economy of Things.

Where are the areas of application of DLT?

Potential areas of application are generally those built on trust and consensus, because information such as contract details, transactions, or other data is secured decentrally in a distributed ledger. Every node of the network holds a copy of the data records.

This creates a system which is difficult to attack or manipulate. At the same time, all information is shared and hence is visible in the network. This transparency makes cyber-attacks even more difficult.
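
To make this concrete, here is a minimal sketch in Python (the record fields and values are invented for illustration): every node holds a full copy of the ledger, each record references the hash of its predecessor, and any manipulation of an earlier record is immediately detectable when a copy is verified.

    import hashlib
    import json

    def record_hash(record):
        # Deterministic SHA-256 hash of a ledger record.
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    def append(ledger, data):
        # Each new record references the hash of the previous one.
        prev = record_hash(ledger[-1]) if ledger else "0" * 64
        ledger.append({"data": data, "prev_hash": prev})

    def verify(ledger):
        # Every node can check its own copy: a tampered record breaks the chain of hashes.
        return all(ledger[i]["prev_hash"] == record_hash(ledger[i - 1])
                   for i in range(1, len(ledger)))

    ledger = []
    append(ledger, {"contract": "charging session", "amount": 4.20})
    append(ledger, {"contract": "charging session", "amount": 3.10})
    print(verify(ledger))             # True
    ledger[0]["data"]["amount"] = 0   # attempted manipulation
    print(verify(ledger))             # False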

What does it mean?

The most important terms from the world of distributed ledger technologies explained.


Bitcoin

The best-known digital currency shares with other cryptocurrencies the fact that it is created electronically and exists purely in the digital world. Cryptocurrencies such as Bitcoin are not regulated centrally or by governments; they are mined decentrally by a network that contributes computing power according to a mathematical algorithm (→Proof of Work).

Ethereum

As a platform, this distributed ledger allows digital programs such as →Smart Contracts to be created, managed, and executed using blockchain technology. Qreatiq is exploring the Ethereum platform with projects such as self-charging and self-paying cars at charging points.

Consensus process

Consensus processes are designed to protect against manipulation. In a consensus process, data are exchanged automatically across the network whenever a transaction is to be recorded. Each DLT works with its own consensus-finding mechanism. In the case of Blockchain, which forms the basis of the cryptocurrency Bitcoin, a transaction is only validated and appended to an existing block if the majority of connected users signal that the data within the block are identical. (See →Proof of Work, →Proof of Stake, →Second Layer)
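
Purely as an illustration of this majority principle (this is not the actual Bitcoin protocol), the following Python sketch accepts a block only if a strict majority of nodes report the same hash for it; the node responses and transaction data are invented.

    import hashlib
    import json
    from collections import Counter

    def block_hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def reach_consensus(reported_hashes):
        # A block is accepted only if a strict majority reports the identical hash.
        value, count = Counter(reported_hashes).most_common(1)[0]
        return value if count > len(reported_hashes) / 2 else None

    block = {"transactions": [{"from": "A", "to": "B", "amount": 5}]}
    honest = block_hash(block)
    tampered = block_hash({"transactions": [{"from": "A", "to": "B", "amount": 500}]})

    # Three honest nodes outvote one manipulated node, so the honest hash wins.
    print(reach_consensus([honest, honest, honest, tampered]) == honest)  # True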

P2P

P2P stands for “peer-to-peer”. In a peer-to-peer system, all participants interact directly with each other with equal authority. They use and simultaneously provide the services of the network, so there is no need for a third party, such as a bank or notary, to perform checks.

Proof of Work (PoW)

Proof of Work guarantees the security of the system through the resource-intensive transmission and validation of large amounts of data between the different users in the system. The proof of work consists of solving complex computational tasks that are difficult to perform but simple to check. This mining leads to a trustworthy, decentralized consensus and at the same time, by creating digital currency (mining), rewards the miners. The system can only be manipulated if a user controls more than 50 percent of the computing power and their system permanently works faster than the systems of the other users. In practice, it is next to impossible to fulfil both conditions at once. Besides the PoW process, there are further methods of ensuring consensus (→Proof of Stake, →Proof of Authority, →Second Layer).
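
The principle “difficult to perform but simple to check” can be sketched in a few lines of Python (the difficulty target and payload are illustrative, not Bitcoin’s real parameters): mining means trying nonces until a hash meets the target, while verification needs only a single hash.

    import hashlib

    DIFFICULTY = 4  # required number of leading hex zeros (illustrative)

    def hash_attempt(payload, nonce):
        return hashlib.sha256(f"{payload}:{nonce}".encode()).hexdigest()

    def mine(payload):
        # Hard: keep trying nonces until the hash meets the difficulty target.
        nonce = 0
        while not hash_attempt(payload, nonce).startswith("0" * DIFFICULTY):
            nonce += 1
        return nonce

    def verify(payload, nonce):
        # Easy: a single hash confirms or rejects the claimed work.
        return hash_attempt(payload, nonce).startswith("0" * DIFFICULTY)

    nonce = mine("block with transactions")
    print(nonce, verify("block with transactions", nonce))  # ..., True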

Proof of Stake (PoS)

PoS is an alternative consensus-finding mechanism to the “classic” →Proof of Work. Here, the probability that a user generates the next block rises with their value share of the network. The advantages of PoS: less computing power is used, block generation in the blockchain is faster, and transaction speed increases. The disadvantage: it is considered less secure than PoW.
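
A sketch of the core PoS idea (the stakes below are made-up numbers): the chance of being chosen to generate the next block is proportional to a user’s share of the network’s value, so no energy-intensive puzzle has to be solved.

    import random

    # Hypothetical stakes: each participant's share of the network's value.
    stakes = {"alice": 50, "bob": 30, "carol": 20}

    def pick_block_producer(stakes):
        # Select a validator with probability proportional to their stake.
        return random.choices(list(stakes), weights=list(stakes.values()), k=1)[0]

    # Over many rounds, alice produces roughly half of all blocks.
    rounds = [pick_block_producer(stakes) for _ in range(10_000)]
    print({name: rounds.count(name) / len(rounds) for name in stakes})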

Proof of Authority (PoA)

PoA is an optimized →Proof of Stake model in which identity replaces shares in the network. So-called validators are selected by reputation, which requires high standards and thorough checking during selection. Transaction speed rises in comparison with →PoW, and in theory users have better control of their data. The disadvantage of this tighter control is that some of the benefits of decentralization are lost.

Second Layer

Second Layer protocols attempt to bypass the resource-intensive processes of →Proof of Work. A blockchain system forms the base layer, and a second layer on top of it only very rarely talks to this slow, complex, and very secure base. The arrangement behaves like a framework contract: the framework contract resides on the expensive and slow base system, while individual processes with flexible requirements and lower security needs run on the second layer, which only rarely checks back with the base. The system thereby achieves greater cost efficiency and more transactions.
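
The following sketch illustrates the framework-contract idea (class and function names are hypothetical, not a real protocol): many small transfers are collected off-chain, and only the netted result is occasionally settled on the slow, secure base layer.

    class SecondLayerChannel:
        """Collects many cheap off-chain transfers; only the net result touches the base layer."""

        def __init__(self, settle_on_base_layer):
            self.settle_on_base_layer = settle_on_base_layer  # expensive, secure settlement
            self.pending = []

        def transfer(self, sender, receiver, amount):
            # Fast and cheap: recorded only inside the channel.
            self.pending.append((sender, receiver, amount))

        def settle(self):
            # Rarely called: write one netted transaction to the secure base layer.
            balances = {}
            for sender, receiver, amount in self.pending:
                balances[sender] = balances.get(sender, 0) - amount
                balances[receiver] = balances.get(receiver, 0) + amount
            self.settle_on_base_layer(balances)
            self.pending.clear()

    channel = SecondLayerChannel(settle_on_base_layer=print)
    for _ in range(100):                                # 100 micro-payments off-chain
        channel.transfer("car", "charging_station", 5)  # amounts in cents
    channel.settle()                                    # one on-chain settlement of the net result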

Smart Contracts

These algorithmic contracts have predefined conditions and can therefore automatically trigger actions once those conditions are fulfilled. They form the basic structure for the performance of contracts from machine to machine — and are just the beginning of a development towards so-called Decentralized Autonomous Organizations (DAO): organizations based on DLT that act autonomously, guided by algorithms without human supervision. The →Ethereum platform allows such projects, including, in the case of Qreatiq applications, the autonomously charging and paying electric vehicle.
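
How a predefined condition triggers an automatic action can be sketched in a few lines (this is plain Python for illustration, not an Ethereum contract; the charging scenario and thresholds are invented):

    class SmartContract:
        """Predefined condition plus action; executes automatically, without human intervention."""

        def __init__(self, condition, action):
            self.condition = condition
            self.action = action
            self.executed = False

        def evaluate(self, state):
            if not self.executed and self.condition(state):
                self.action(state)
                self.executed = True

    # Hypothetical machine-to-machine example: pay the charging station
    # as soon as the agreed amount of energy has been delivered.
    contract = SmartContract(
        condition=lambda state: state["kwh_delivered"] >= 10,
        action=lambda state: print(f"transfer {state['price_per_kwh'] * 10:.2f} EUR"),
    )

    contract.evaluate({"kwh_delivered": 4, "price_per_kwh": 0.30})   # nothing happens yet
    contract.evaluate({"kwh_delivered": 10, "price_per_kwh": 0.30})  # payment triggers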

How do DLT promote the Economy of Things?

Thanks to their potential for decentralized, secure, transparent, and automated transactions, DLT enable so-called “Smart Contracts”: algorithmic contracts for transactions concluded between machines, without any human intervention. The resulting transactions are executed autonomously by algorithms. In such an Economy of Things, DLT create trust, ensure fairness and consensus, and thereby enable cross-industry value-added networks.

4 steps towards an Economy of Things

Where is the Economy of Things already applied?

One example is a project that Qreatiq has set up together with the German energy supplier EnBW.

In this project, an electric vehicle negotiates prices and concludes a contract directly with a charging station. The user merely enters in the on-board computer how much money they are prepared to pay and how far the battery level may be run down. The rest is done by a digital agent in the vehicle, which negotiates the transaction with the charging stations.
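
The decision logic of such an agent might look roughly like this (the offers, station names, and thresholds are invented, and the real project's negotiation is certainly more involved): the agent charges only once the battery has reached the allowed minimum and picks the cheapest station within the user's price limit.

    def choose_charging_offer(offers, max_price_per_kwh, battery_level, min_battery_level):
        # Charge only if needed, and only at a station within the user's price limit.
        if battery_level > min_battery_level:
            return None  # the battery may still be run down further
        affordable = [o for o in offers if o["price_per_kwh"] <= max_price_per_kwh]
        return min(affordable, key=lambda o: o["price_per_kwh"], default=None)

    # Hypothetical offers broadcast by nearby charging stations.
    offers = [
        {"station": "station-01", "price_per_kwh": 0.35},
        {"station": "station-02", "price_per_kwh": 0.29},
        {"station": "station-07", "price_per_kwh": 0.45},
    ]

    print(choose_charging_offer(offers, max_price_per_kwh=0.40,
                                battery_level=0.15, min_battery_level=0.20))
    # -> {'station': 'station-02', 'price_per_kwh': 0.29}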

The system is open to any player worldwide and belongs to all users of the system. This infrastructure, implemented for the benefit of society, is an example of how distributed ledger technologies enable the old cooperative concept to be applied in the digital age.

From Turing to Watson: The development of thinking systems

2018-01-30

Scientists have been working on artificial intelligence since the middle of the last century. Their goal: to develop machines that learn and think like humans. Here is an overview of the key findings and technological milestones they have reached.

1936: Turing machine

The British mathematician Alan Turing applies his theories to prove that a computing machine — known as a ‘Turing machine’ — would be capable of executing cognitive processes, provided they could be broken down into multiple, individual steps and represented by an algorithm. In doing so, he lays the foundation for what we call artificial intelligence today.

1956: The history begins: the term ‘AI’ is coined

In the summer of 1956, scientists gather for a conference at Dartmouth College in New Hampshire. They believe that aspects of learning as well as other characteristics of human intelligence can be simulated by machines. The programmer John McCarthy proposes calling this ‘artificial intelligence.’ The world’s first AI program, ‘Logic Theorist’ — which manages to prove several dozen mathematical theorems — is also written during the conference.

1966: Birth of the first chatbot

The German-American computer scientist Joseph Weizenbaum of the Massachusetts Institute of Technology invents a computer program that communicates with humans. ‘ELIZA’ uses scripts to simulate various conversation partners such as a psychotherapist. Weizenbaum is surprised at the simplicity of the means required for ELIZA to create the illusion of a human conversation partner.

1972: AI enters the medical field

With ‘MYCIN’, artificial intelligence finds its way into medical practice: the expert system developed by Ted Shortliffe at Stanford University supports the treatment of illnesses. Expert systems are computer programs that bundle the knowledge of a specialist field using formulas, rules, and a knowledge database. In medicine, they are used to support diagnosis and treatment.

1986: ‘NETtalk’ speaks

The computer is given a voice for the first time. Terrence J. Sejnowski and Charles Rosenberg teach their ‘NETtalk’ program to speak by inputting sample sentences and phoneme chains. NETtalk is able to read words and pronounce them correctly, and can apply what it has learned to words it does not know. It is one of the early artificial neural networks — programs that are supplied with large datasets and are able to draw their own conclusions on this basis. Their structure and function are thereby similar to those of the human brain.

1997: Computer beats world chess champion

The AI chess computer ‘Deep Blue’ from IBM defeats the reigning chess world champion Garry Kasparov in a tournament. This is considered a historic success in an area previously dominated by humans. Critics, however, object that Deep Blue wins merely through brute-force calculation of possible moves rather than through cognitive intelligence.

2011: AI enters everyday life

Technology leaps in the hardware and software fields pave the way for artificial intelligence to enter everyday life. Powerful processors and graphics cards in computers, smartphones, and tablets give regular consumers access to AI programs. Digital assistants in particular enjoy great popularity: Apple’s ‘Siri’ comes to the market in 2011, Microsoft introduces the ‘Cortana’ software in 2014, and Amazon presents Amazon Echo with the voice service ‘Alexa’ in 2015.

2011: AI ‘Watson’ wins quiz show

The computer program ‘Watson’ competes in a U.S. television quiz show in the form of an animated on-screen symbol and wins against the human players. In doing so, Watson proves that it understands natural language and is able to answer difficult questions quickly.

2018: AI debates space travel and makes a hairdressing appointment

These two examples demonstrate the capabilities of artificial intelligence: In June, ‘Project Debater’ from IBM debated complex topics with two master debaters — and performed remarkably well. A few weeks before, Google demonstrated at a conference how the AI program ‘Duplex’ phones a hairdresser and conversationally makes an appointment — without the lady on the other end of the line noticing that she is talking to a machine.

20xx: The near future is intelligent

Despite decades of research, artificial intelligence is still comparatively in its infancy. It needs to become more reliable and more secure against manipulation before it can be used in sensitive areas such as autonomous driving or medicine. Another goal is for AI systems to learn to explain their decisions so that humans can comprehend them and better understand how AI thinks. Numerous scientists, such as Qreatiq-endowed professor Matthias Hein at the University of Tübingen, are working on these topics.