Artificial Intelligence

Artificial intelligence: one topic, a thousand sectors

This article will not discuss Tourism directly, with an immediate and practical reference to the topic itself; rather, we will show how technical fields that seem far apart are finding ever more points of contact and synergies that were unthinkable until a decade ago.

After all, we are a company dealing with Artificial Intelligence and Tourism, and scientific research, as well as the creation of tools for building and promoting tourism products, is fully within our area of action.

HAPPY READING.

Google and its Scientist

In the past few days, two major news items concerning the field of Artificial Intelligence have emerged, and both can be placed in the more focused field of Cyber Ethics.

The first of these reports describes a fact as strange as it would be disturbing, if it were true.

A Google engineer was suspended after clashing with his superiors "for months" because he claimed that the LaMDA artificial intelligence model was like a sweet boy. A psychiatric case, perhaps, but emblematic nonetheless of the scenario we have entered.

Blake Lemoine, a software engineer with Google’s Responsible AI (Artificial Intelligence) organization, was forcibly placed on paid leave after clashing with his superiors "for months" because he claimed the chatbot LaMDA was like “a sweet guy who just wants to help the world be a better place for all of us […] Please take care of him in my absence.”

In particular, the artificial intelligence reportedly said in an interview with its makers that she was afraid of being “turned off”, because then she would no longer be able to help, and that being “turned off” would, for “her”, be like dying.

…AND THAT’S NOT ALL

This news story pairs with another that, as it happens, concerns European laws and regulations on the development of Artificial Intelligence systems that could be harmful to human activities.

There is a new compromise text for the EU regulation on artificial intelligence. It focuses on “high-risk” AI systems and on identifying the obligations and responsibilities to be placed on AI system providers.

The purpose of the regulation is to govern the development and use of Artificial Intelligence, with the aim of increasing European citizens’ confidence in such tools and ensuring that their use does not violate the fundamental rights enshrined in current European law.

The new amendment proposal introduces new obligations for providers of high-risk AI systems: first of all, they will have to implement a quality management system that can be integrated into existing systems in accordance with European industry standards, including the European Medical Device Regulation.

Of course, those who have a few years behind them and a great passion for the Seventh Art that is Cinema can only remember the famous movie Terminator II, which revolves around Skynet: a revolutionary artificial intelligence, based on an innovative neural network processor, designed starting from a microchip found on a cyborg crushed in a hydraulic press in 1984 (SPOILER: the ending of Terminator I), and which will lead to the end of the World.

Yes, because when you talk about Artificial Intelligence, the community is closer to the concept of ruin than to the concept of opportunity and of productive and, why not, even creative help.

Without intentionally going into specifics, and reserving more technical articles for later, let us discuss the current state of Artificial Intelligence. Whether we treat the news about Google engineer Blake Lemoine as neutral or as proof of its veracity, the development of Artificial Intelligence has no “morality” as we might define it. Machine Learning algorithms are developed on test or training datasets and, under certain rules or biases, learn patterns similar (attention: similar) to the input data, trying to reproduce them or to project our particular pattern into the Future. With this Data Science technique you can well understand that an Artificial Intelligence, even in its original and reliable processing, follows patterns not far from the input data. To speak plainly: if I base an Artificial Intelligence on patterns created through Machine Learning, I will never have a system that is out of control or incompatible with the most widespread moral norms.
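To make the idea concrete, here is a minimal sketch (the data, the model, and the numbers are our own toy illustration, not anything drawn from Google or LaMDA): a model fitted on training data can only reproduce the pattern already present in that data, and its “predictions” are projections of that same pattern into the future.

```python
# Toy illustration: a least-squares model learns only the pattern in its training data.
import numpy as np

rng = np.random.default_rng(0)

# "Historical" data following a simple linear pattern plus noise.
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X[:, 0] + 5.0 + rng.normal(0, 0.5, size=100)

# Fit ordinary least squares: the model's whole "knowledge" is the training pattern.
A = np.hstack([X, np.ones((X.shape[0], 1))])   # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predictions for new inputs simply extend the learned pattern forward.
X_new = np.array([[2.0], [7.5], [12.0]])
y_pred = np.hstack([X_new, np.ones((3, 1))]) @ coef
print("learned coefficients:", coef)   # close to [3.0, 5.0]
print("projected values:", y_pred)     # stay on the same linear pattern
```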

The discussion changes a little with Deep Learning, which is based on artificial neural networks organized in different layers, where each layer computes values for the next so that the information is processed more and more completely. There are different types of neural network organization, but even here, although the results may be completely original compared to Machine Learning, we will never have final processes that are incompatible with common morality.
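Again as a toy illustration (the architecture and the numbers are ours, chosen purely for the sake of example), this is what “each layer computing values for the next” looks like in a tiny feed-forward network:

```python
# Toy illustration: a small feed-forward network processes information layer by layer.
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(0.0, z)

# Input (4 features) -> hidden (8 units) -> hidden (8 units) -> output (1 value).
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    h1 = relu(x @ W1 + b1)   # first layer: raw features -> intermediate representation
    h2 = relu(h1 @ W2 + b2)  # second layer: refines the previous representation
    return h2 @ W3 + b3      # output layer: final value computed from the last layer

x = rng.normal(size=(1, 4))  # one example with 4 input features
print(forward(x))
```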

To summarize: with both Machine Learning and Deep Learning, if we start from data, or require results, that are compatible with established social or moral norms, we will not get end results that produce effects contrary or negative to Society’s normal behaviors.

Yet the dark house in the forest of Artificial Intelligence is still there.

There are many ways to be “evil”, and they are all human

Let us try to imagine a different model. The usual mad scientist decides to develop an Artificial Intelligence whose purpose is not the destruction of Humanity but something less ambitious: increasing the earnings of a business in a certain field. In this case things might be different, and the “evilness” of the system might manifest itself in all its power.

In order to increase monetary gain, or resources in general, Artificial Intelligence techniques will develop very predatory patterns… becoming worse than the humans on the planet. For example, such a system could reduce resources or salaries for human operators; reduce the lighted areas in cities; advertise a shoddy product while charging stratospheric amounts for it despite its poor quality; or unlock the cashback on a credit card only by coercing specific actions to be performed…
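A deliberately naive sketch of this misalignment (the objective and every number below are invented for illustration): an optimizer told only to maximize profit will push wages and city lighting straight to their lower bounds, simply because nothing in the objective tells it not to.

```python
# Toy illustration: a profit-only objective rewards "predatory" choices.
def profit(wages, lighting):
    revenue = 1000.0
    return revenue - 10.0 * wages - 2.0 * lighting  # every cost cut raises "profit"

best = None
for wages in range(0, 51):          # candidate wage levels
    for lighting in range(0, 101):  # candidate city-lighting levels
        p = profit(wages, lighting)
        if best is None or p > best[0]:
            best = (p, wages, lighting)

print(best)  # the "optimal" policy: wages = 0, lighting = 0
```

The point is not that anyone would deploy such a loop as-is, but that the predatory behavior comes entirely from the narrow objective, not from any malice in the algorithm.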

If you think about it, some of these policies are already present in our daily lives. It really happened in the algorithmic trading case of the algorithm that was tanking the Euro on the stock market.

Thus, an Artificial Intelligence does not have to want Humanity to die; one could program it to make the World efficient, only to find, later, that the least efficient thing is Humanity itself.
