Brian Aldiss was an English writer best known for his science fiction novels and short stories. Probably the most famous of them, “Supertoys Last All Summer Long”, first published in the UK edition of Harper's Bazaar in December 1969, tells the story of a boy who cannot please his mother despite trying his best in every way he can. It turns out that he doesn’t realize he is an android, living in an age of intelligent machines and endemic loneliness, in an overpopulated future where the creation of children is controlled.

Super bots Last All Seasons Long

A movie buff might find this story familiar. The short story served as the basis for the first act of the feature film “A.I. Artificial Intelligence”, directed by Steven Spielberg and released in 2001. The plot broaches other issues, like global warming, but the main subject is the capacity of machines to experience love. In a world where machines can process complex thoughts, having emotions seems to be the missing puzzle piece on the way to becoming human.

Science fiction has always been a subject humans like to explore, and when it comes to imagining where technology will take us, creativity knows few limits. Sometimes it even looks like a kind of prediction and, corny as it sounds, the fiction comes very close to reality. Although AI has not yet reached milestones like theory of mind or self-awareness, the technology is already a reality and a trend for the years ahead. Companies and governments are investing heavily in advancing it, the progress achieved so far is astonishing, and concerns about security and the ethical use of the technology are being intensely debated.

Artificial intelligence is the simulation of human intelligence by computer systems that can learn, perceive variables, reason in order to make decisions, solve problems and correct themselves. Starting from preprogrammed code, the system takes variables into account, processes the data and determines what to do in each situation. An AI system can be designed and trained for a particular task, like a virtual personal assistant, or it can be more complex and powerful, with generalized human-like cognitive abilities, sometimes able to find solutions without human intervention, such as self-driving cars.
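The perceive–reason–act loop described above can be sketched in a few lines. This is a deliberately toy, rule-based illustration (real self-driving systems learn their behavior from data rather than from hand-written rules); the scenario, variable names and thresholds are all hypothetical:

```python
# Toy illustration of perceive -> reason -> act:
# the system reads two variables and decides what to do in each situation.

def decide(obstacle_distance_m: float, speed_kmh: float) -> str:
    """Apply preprogrammed rules to the perceived variables."""
    if obstacle_distance_m < 10:
        return "brake hard"
    if obstacle_distance_m < 30 and speed_kmh > 50:
        return "slow down"
    return "keep going"

print(decide(5, 40))    # -> "brake hard"
print(decide(20, 80))   # -> "slow down"
print(decide(100, 80))  # -> "keep going"
```

The point of the sketch is the structure, not the rules themselves: what distinguishes modern AI is that the decision logic is learned from data instead of being written out by hand.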

It’s important to realize that AI is always incorporated into a variety of other technologies. A spam blocker is no longer a tech novelty, but it has more to do with AI than you might imagine: it looks at the subject line and text of an email and decides whether it’s junk. The technology behind it is Natural Language Processing (NLP), where a computer is capable of processing human language. Current approaches to NLP are based on Machine Learning and can handle more complex tasks like text translation, sentiment analysis, and speech recognition.
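A classic way to build exactly this kind of spam blocker is a naive Bayes text classifier. The sketch below is a minimal, self-contained version with an invented six-message training set; a real filter would be trained on thousands of messages and use far better text preprocessing:

```python
import math
from collections import Counter

# Toy training data: (text, label) pairs. Real filters use thousands of messages.
TRAIN = [
    ("win cash prize now", "spam"),
    ("cheap pills limited offer", "spam"),
    ("claim your free prize", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the team tomorrow", "ham"),
    ("project status report attached", "ham"),
]

def train(examples):
    """Count word frequencies per label -- the whole 'model' of naive Bayes."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest log-probability (add-one smoothing)."""
    vocab = set()
    for counter in word_counts.values():
        vocab |= set(counter)
    total = sum(label_counts.values())
    scores = {}
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(TRAIN)
print(classify("free cash prize", word_counts, label_counts))      # -> "spam"
print(classify("monday team meeting", word_counts, label_counts))  # -> "ham"
```

The classifier never sees a rule like "prize means spam"; it infers the pattern from the labeled examples, which is the sense in which even a humble spam blocker is machine learning at work.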

Machine Learning, which may be the hippest term of the moment, is a technology that maintains close ties with AI. Its very definition overlaps with that of AI: the science of getting computers to act without being explicitly programmed. Machine Learning today can detect patterns in previously labeled data sets (supervised learning), can sort data when the data sets aren’t labeled (unsupervised learning), and can learn from the feedback it receives after performing one or more actions (reinforcement learning). In other words, current systems may include machine learning capabilities that allow them to improve their performance based on experience, just as humans do.

These are a few examples of the technologies that enable AI to work and provide intelligent systems. Automation, machine vision, and robotics are some of the others. All of them are constantly developing, sometimes in combination, to create new solutions to human problems.

Automation has been an industry staple for many decades, and the machines keep getting smarter. With AI, there is equipment that manufactures and inspects products without having to be operated by a human. Chatbots and systems built on Natural Language Processing are becoming smart enough to replace human attendants, available to answer users’ questions 24 hours a day. AI in education can automate grading, giving educators more time, and can assess students and adapt to their needs, helping them work at their own pace. AI in finance can collect personal data and provide financial advice. AI in online retail can recognize users’ buying patterns and present them with offers matching their preferences.

In the communication area, there are programs, with access to databases, that can write informative news stories in a way that makes it difficult for the reader to distinguish them from texts written by humans. Maybe even this one.

While AI tools offer a range of new functionality for businesses, the use of artificial intelligence raises ethical questions. All the programming and the algorithms are defined by humans, so who builds these structures and who controls them is a crucial matter of security, of trust, and of where we want to go. Deep learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human selects the data used to train an AI program, the potential for human bias is inherent and must be monitored closely.

The standards we use to collect data and turn it into algorithms carry nothing less than our built-in opinions. Take Tay, the chatbot Microsoft launched in 2016: because it used its dialogues with teenagers as the basis for its answers, in less than 24 hours it had become an advocate of incest and an admirer of Hitler. For this and other reasons, the application of ethics cannot be limited to the standards we use; it must also reach those behind the scenes, the people who standardize the information.

Unless it is grounded in ethics, artificial intelligence becomes a vehicle for codifying human prejudice, directly affecting everyone with access to the Internet and digital devices. We should question what criteria the algorithms apply when screening job candidates or approving bank loans, or how systems draw connections between "suspects" in digital surveillance.

The Financial Times recently published an article about the use of facial recognition systems by companies and governments around the world. While the Chinese telecoms company Huawei boasts that its cameras led to a 46 percent drop in a regional crime rate in 2015, some critics believe that companies are spinning their products to fit the political demands of African elites, for example.

At least 52 governments are using the technology in the name of security, according to research by the Carnegie Endowment for International Peace. While the debate over facial recognition in the EU and the US focuses on the privacy threat of governments or companies identifying and tracking people, in China the debate is often framed around the threat of leaks to third parties rather than abuses by the operators themselves.

Meanwhile, China’s surveillance industry is already moving on to the next frontier of computer image recognition: identifying people by the way they walk and trying to read their emotions.

Few regulations govern the use of AI tools, and where laws do exist, they typically address AI only indirectly. Some lending regulations require financial institutions to explain credit decisions to potential customers, which limits the extent to which lenders can use deep learning algorithms. Europe's GDPR puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

The future of AI and its applications isn’t clear yet. A broad and deep debate is crucial to define how the technology should be used, to ensure the safety of those who use it, to clarify the purposes behind the applications, and to understand how the algorithms read the information and what interests are involved. New concerns about the use of AI that we cannot yet foresee are bound to appear, but we need to know and talk about what is already emerging and may affect us all.

Some pessimists believe in an apocalyptic world where machines will take control over humans. Elon Musk, Tesla's CEO, is among those who have publicly spoken out about the dangers of artificial intelligence turning against us. In "One Crew Over the Crewcoo's Morty", the third episode of season four of the TV show Rick and Morty, he appears as Elon Tusk, a version of himself with tusks instead of teeth. Briefly, and avoiding spoilers, the episode addresses artificial intelligence: the character Rick uses a robot with programmed intelligence to execute a plan to deceive others, and it turns out that, in the end, the robot itself tries to execute a plan to deceive Rick.

Artificial intelligence is still closely linked to popular culture, which leaves the general public with unrealistic fears about it and improbable expectations about how it will change the workplace and life in general. And let’s be honest: many companies say they use AI just to get a better valuation, when most of what they run are basic regression models. But that is changing. Our role at Talle is to learn about it, gather opinions, discuss it, and decide what the best way forward is. We are in the middle of building an ecosystem with incredible opportunities to improve our lives, but we should all be aware of the risks in order to reduce them, make them transparent, and allow sustainable growth. Super bots will last. Let’s make the best of it.