The so-called endpoint of intelligence was reached long ago in human history
The benchmark for artificial intelligence is the assistant at the side of today's bosses and leaders, the maids and butlers who attended nobles in more recent history, and the eunuchs and prime ministers who served emperors in ancient times.
Beyond directly handling the tasks their owner assigns, they see what the owner sees, hear what the owner hears, and feel what the owner feels. From the objective environment and the owner's subjective standpoint, they infer the owner's true intention and offer personalized solutions.
The more accurate that personalized understanding, the more reasonable the solution, and the more intelligent the assistant appears.
Humans who say one thing and mean another
The picture is from the movie “Her” (2013). Although this science-fiction film is not very hard-science, I really like it because it tells a great story about an AI learning human emotions. In this movie we can glimpse the contradiction in expressing our own needs: we are unable to say in words what we truly want (we say one thing and mean another), and under such circumstances it is not easy to satisfy human needs again and again.
I also need to mention that, beyond what is described above, so-called intelligence is often deliberately staged to look intelligent. We need to understand that the essence of this is coolness, not intelligence. Consumers, however, may not see the difference, and because they too say one thing and mean another, they deliberately pursue coolness.
Our goal is to create products, services, and experiences that are both cool and intelligent
We do not need to tell consumers that they are wrong. Instead, we should use this consumer psychology to our advantage and design cool animations, styling, and even exaggerated effects. We should satisfy consumers' psychological needs and establish expectations before they begin full use.
Every intelligent product inevitably runs into mishaps in use. Fortunately, we can use the expectation effect of human irrationality to cultivate hardcore fans. When consumers pay for an intelligent product, each bears a different psychological cost depending on the amount paid, and a different degree of bond is formed. When that cost is high enough, consumers hold strong expectations. If, after a mishap, we eventually deliver the promised result to them even with a delay, consumers become more willing to help improve the product's experience, and thereby their own, and will voluntarily recommend this cool and intelligent product, service, and experience to their friends and partners.
Obviously, the car is not the only scenario. Various types of sensors are collecting our information in every aspect of life.
Sensors throughout the car collect various kinds of information about the owner, encrypt it, and upload it to the cloud for processing; the system then provides suggestions and personalized execution of tasks.
A simple artificial-intelligence self-learning closed loop: capture the owner's actions and expressions when they see a suggestion, rate the execution of similar tasks based on those reactions, and keep running machine learning on that feedback to improve the accuracy of suggestions and the effectiveness of execution (a rough sketch of such a loop follows below).

In addition, we should not separate the user's scenarios inside and outside the car. Performing so-called intelligent interaction only in the car amounts to a virtual co-pilot: if it knows nothing about the owner's life outside of driving, the support it can provide is limited. We should instead envision a virtual assistant that follows the user wherever they go.
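To make the closed loop concrete, here is a minimal sketch of what such a feedback loop might look like. All names (SuggestionFeedbackLoop, rate_reaction, the expression labels) are hypothetical illustrations for this article, not any vendor's actual API, and a production system would use far richer signals and models.

```python
# Hypothetical sketch of the self-learning closed loop described above.
# Class, method, and label names are illustrative assumptions, not a real vendor API.
from dataclasses import dataclass, field


@dataclass
class SuggestionFeedbackLoop:
    """Keeps a running usefulness score per task type, updated from observed reactions."""
    learning_rate: float = 0.1
    scores: dict = field(default_factory=dict)  # task_type -> estimated usefulness in [0, 1]

    def rate_reaction(self, accepted: bool, expression: str) -> float:
        """Turn a captured action and facial expression into a crude reward signal."""
        reward = 1.0 if accepted else 0.0
        if expression == "pleased":
            reward = min(1.0, reward + 0.2)
        elif expression == "annoyed":
            reward = max(0.0, reward - 0.2)
        return reward

    def update(self, task_type: str, accepted: bool, expression: str) -> None:
        """Move the score for this task type toward the observed reward."""
        reward = self.rate_reaction(accepted, expression)
        old = self.scores.get(task_type, 0.5)
        self.scores[task_type] = old + self.learning_rate * (reward - old)

    def should_suggest(self, task_type: str, threshold: float = 0.4) -> bool:
        """Only surface suggestion types that have worked for this owner before."""
        return self.scores.get(task_type, 0.5) >= threshold


# Example: the owner accepts a navigation suggestion and looks pleased,
# so similar suggestions become slightly more likely in the future.
loop = SuggestionFeedbackLoop()
loop.update("navigation", accepted=True, expression="pleased")
print(loop.should_suggest("navigation"))  # True
```

The design choice worth noting is that the loop learns per owner and per task type, which matches the article's point: the value lies in personalized understanding, not in a single model that treats every user the same.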
Protecting user privacy is not as difficult as imagined.
Responsible companies should transform every piece of collected information into untraceable, anonymous data. Otherwise, indifference is tantamount to malice.
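As one illustration of what such a transformation might involve, the sketch below pseudonymizes the direct identifier and coarsens location before anything leaves the device. The field names, salt handling, and rounding precision are assumptions for demonstration only; genuinely untraceable anonymization (k-anonymity, differential privacy, and so on) requires considerably more care than this.

```python
# Illustrative sketch only: one way to strip direct identifiers before upload.
# Field names and salt handling are assumptions; real anonymization needs stronger guarantees.
import hashlib
import os

SALT = os.urandom(16)  # kept on the device and never uploaded, so hashes cannot be reversed remotely


def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()


def anonymize_record(record: dict) -> dict:
    """Drop or coarsen anything that could trace back to the owner."""
    return {
        "owner_id": pseudonymize(record["owner_id"]),  # no raw identity leaves the car
        "latitude": round(record["latitude"], 2),      # roughly 1 km precision instead of an exact position
        "longitude": round(record["longitude"], 2),
        "event": record["event"],                      # the behavioral signal the service actually needs
    }


sample = {"owner_id": "VIN-1234567", "latitude": 31.2304, "longitude": 121.4737, "event": "accepted_suggestion"}
print(anonymize_record(sample))
```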
This article is a translation by ChatGPT of a Chinese report from 42HOW. If you have any questions about it, please email bd@42how.com.