
Synthetic humans: Digital twins

Modern society is changing, and it is changing fast. As digital transformation reshapes the world around us, one shift that many feel is more profound than any other in human history is the advent of synthetic humans and digital twins.

Synthetic humans and digital twins have been a hot topic in artificial intelligence and technology in recent years. These innovative concepts have the potential to revolutionise the way we live and work, and they have sparked a great deal of interest and debate among experts and the general public alike. Over the past few months, I have been working hard to create a digital twin of myself: a synthetic me that looks like me, talks like me, sounds like me and moves like me. Hopefully, at some point in the future, it will be able to replace me when delivering keynotes and workshops!

In recent years, the production of synthetic humans and digital twins has become a booming industry. With billions of dollars invested in research on robots, synthetic humans and AI, the emergence of genuinely intelligent machines within the next couple of decades seems increasingly plausible. Opinions on this subject vary widely across organisations, governments and academia. Some believe synthetic humans are a direct threat to humankind that must be stopped at all costs. Others believe they will usher in a new era of wonder that will forever change the face of humanity.

But what exactly are synthetic humans and digital twins, and how do they differ from each other? How are they being used in the real world, and what are the ethical and philosophical implications of their creation and use?

In this article, we will explore these questions and provide a comprehensive introduction to synthetic humans and digital twins, covering their definitions, uses, and potential future developments.

History and Concept of Synthetic Humans

The concept of synthetic humans, or artificial humans, has a long history that dates back to ancient mythology and storytelling. In many cultures, stories of artificial humans or beings created through technology have been a popular theme, often used to explore ideas about the nature of humanity and the limits of science and technology.

One of the earliest examples of a synthetic human in literature is the ancient Greek myth of Pygmalion, who was a sculptor who created a beautiful statue of a woman and fell in love with it. In the story, the goddess Aphrodite brought the statue to life, turning it into a real woman named Galatea. This myth is often cited as an early example of the idea of a synthetic human or android, as it involves the creation of a being that is indistinguishable from a real human.

In more recent times, the concept of synthetic humans has continued to captivate the imagination of science fiction writers, artists, and filmmakers. In many science fiction stories, synthetic humans are depicted as advanced androids or robots that are designed to resemble and behave like real humans and are often used for tasks that are too dangerous or undesirable for humans to perform. Some stories portray synthetic humans as having superhuman abilities or being more intelligent than humans, while others depict them as being indistinguishable from humans in every way except for their lack of emotions or ability to feel pain.

As an example, we have the 1968 science fiction novel “Do Androids Dream of Electric Sheep?” by Philip K. Dick. This book tells the story of a bounty hunter tasked with tracking down and “retiring” rogue androids in a future society where synthetic humans are common. The main character, Rick Deckard, comes to question the true nature of humanity as he confronts the androids, which exhibit a range of emotions and behaviours that are indistinguishable from those of humans.

Another great story is the book “Altered Carbon” written by Richard K. Morgan. In this book, humans have the ability to transfer their consciousness into synthetic bodies, or “sleeves,” allowing them to effectively live forever. The main character, Takeshi Kovacs, is a former soldier whose consciousness has been revived in a new sleeve after centuries of suspension. Kovacs becomes embroiled in a dangerous conspiracy involving a wealthy businessman, and his combat and investigative skills, combined with his advanced sleeve technology, make him a formidable opponent. The book has been turned into a two-season Netflix series, which I can certainly recommend.

In the real world, there have been a few examples of synthetic humans or humanoid robots that have been developed by robotics and AI researchers. However, these machines are still far from being able to fully replicate the complexity of a human being, and the creation of a truly synthetic human remains in the realm of science fiction for now.

In recent decades, synthetic media research has gained a great deal of attention. The field was first explored during the 1950s and 1960s through algorithmic and generative experiments, and it gathered pace again in the late 1980s and early 1990s as computational power grew and the World Wide Web emerged. In 1997, Bregler, Covell, and Slaney published a paper entitled Video Rewrite: Driving Visual Speech with Audio, which introduced a program that could reassemble existing footage of a person's mouth to match a new audio track; earlier research had already produced increasingly convincing synthetic faces and voices. Related techniques later appeared in popular Hollywood blockbusters, including Star Wars Episode II: Attack of the Clones (2002) and Spider-Man 2 (2004).

More recently, synthetic media (which refers to media that is generated by computer algorithms rather than being recorded or filmed with a camera) and generative AI (which refers to AI systems that are able to generate new content, such as text, music, or images, based on a set of input data) have taken the internet by storm due to their potential to create synthetic humans and digital twins, and both are likely to continue to shape the way we interact with technology in the future.

What is a Synthetic Human?

A synthetic human, also known as a synthetic person or an artificial human, is a hypothetical being created using artificial intelligence and technology. It is a being that is designed to resemble and behave like a human in every way, including physical appearance, personality, and intelligence.

The concept of synthetic humans has long captured the imagination of science fiction writers, artists, and filmmakers. In literature, films and television programs, synthetic humans—androids or robots that seem real to the point of possibly passing as human beings—are often depicted performing jobs too dangerous or unpleasant for people. In some narratives, synthetic human beings are portrayed as having superhuman abilities or being more intelligent than humans; in others, they appear to be identical except for their lack of emotions.

Despite the challenges, the possibility of creating synthetic humans has long fascinated scientists, philosophers, and the general public. It raises important questions about the nature of humanity, the limits of technology, and the ethical implications of creating beings that are so similar to humans (which we will cover later in this article). As technology continues to advance, it is likely that the discussion around synthetic humans will only become more relevant and pressing.

Now, let’s dive into what digital twins are.

What is a Digital Twin?

A digital twin is like a virtual doppelganger, a digital replica of a real-world object, system, or process. It is like having a copy of something in the digital world that is connected to its real-world counterpart.

Imagine you have a car. Now imagine having a digital version of that car that is an exact copy of the real one. You can see all the details of the car, from the engine to the paint colour, in the digital world. And as you drive the car in the real world, the digital twin is updated with all the data from the car’s sensors, such as its speed, fuel consumption, and tire pressure.

Digital twins are not just static copies of real-world objects; they are dynamic and interactive. They can be used to simulate, analyse, and optimise the performance of their real-world counterparts. For example, you can use the digital twin of your car to test different driving scenarios, such as driving on different types of roads or in different weather conditions. You can also use the digital twin to identify and fix problems with the car before they happen in the real world.
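To make this concrete, here is a minimal, hypothetical Python sketch of a car's digital twin: it mirrors incoming sensor readings and flags problems in the virtual copy before they surface in the real car. The `CarTwin` class, the field names, and the thresholds are all illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class CarTwin:
    """A minimal digital twin mirroring a car's latest sensor readings."""
    vin: str
    state: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        """Update the twin with a new batch of sensor data from the real car."""
        self.state.update(reading)
        self.history.append(dict(self.state))  # keep a snapshot for analysis

    def check_alerts(self) -> list:
        """Flag problems in the virtual copy before they occur in the real world.
        Thresholds here are invented for illustration."""
        alerts = []
        if self.state.get("tire_pressure_psi", 35) < 30:
            alerts.append("low tire pressure")
        if self.state.get("fuel_level_pct", 100) < 10:
            alerts.append("low fuel")
        return alerts

twin = CarTwin(vin="WVW-123")
twin.ingest({"speed_kmh": 88, "tire_pressure_psi": 28, "fuel_level_pct": 55})
print(twin.check_alerts())  # -> ['low tire pressure']
```

In a real deployment, `ingest` would be fed by a telemetry stream rather than called by hand, and the accumulated history would drive the simulation and optimisation scenarios described above.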

In addition, digital twins are being used in a variety of industries, including manufacturing, healthcare, and transportation, to improve efficiency, reduce costs, and mitigate risks. They are a powerful tool for understanding and optimising complex systems and have the potential to revolutionise the way we live and work.

Difference Between Synthetic Humans and Digital Twins

Synthetic humans and digital twins are two terms that are often used interchangeably, but they actually refer to two different concepts.

Synthetic humans are computer-generated representations of humans that are designed to look and behave like real people. They can be used for a variety of purposes, including entertainment, education, and research. Digital twins, on the other hand, are digital representations of real-world objects, systems or processes. They are used to model and predict the behaviour of these systems and can be used in a variety of fields, including manufacturing, transportation, and healthcare.

In summary, synthetic humans are computer-generated representations of people, while digital twins are digital representations of real-world systems. While they are related, they are used for different purposes and are not interchangeable.

Use Cases of Synthetic Humans

Artificially Intelligent Non-Playing Characters

Artificially Intelligent Non-Playing Characters (AI NPCs) are characters in video games that are powered by artificial intelligence. These characters are able to interact with the player and make decisions on their own rather than simply following a predetermined set of instructions. This allows AI NPCs to behave in a more realistic and dynamic way, and they can make the gameplay experience more immersive and engaging.

AI NPCs can be used for a variety of purposes in video games, such as providing quests or tasks for the player to complete or serving as opponents or allies in combat. They can also be used to populate the game world, provide a sense of life and activity within it, and create more realistic and dynamic gameplay experiences.
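As a rough illustration of how such autonomous decision-making can work under the hood, many games implement NPC behaviour as a finite-state machine that reacts to the player rather than following one fixed script. The `GuardNPC` below is a hypothetical, minimal example; the states and distance thresholds are invented for illustration.

```python
class GuardNPC:
    """A tiny AI NPC: a finite-state machine that reacts to the player
    instead of following a single predetermined script."""

    def __init__(self):
        self.state = "patrol"

    def update(self, player_distance: float, player_hostile: bool) -> str:
        """Advance the NPC's behaviour one tick, given what it perceives."""
        if self.state == "patrol" and player_distance < 10:
            self.state = "investigate"          # something caught its attention
        elif self.state == "investigate":
            if player_hostile:
                self.state = "attack"
            elif player_distance >= 10:
                self.state = "patrol"           # nothing there after all
        elif self.state == "attack" and player_distance >= 20:
            self.state = "patrol"               # lost sight of the player
        return self.state

guard = GuardNPC()
print(guard.update(player_distance=8, player_hostile=False))  # -> investigate
print(guard.update(player_distance=8, player_hostile=True))   # -> attack
```

Modern AI NPCs layer learning and language models on top of such structures, but the core idea of perceiving, deciding, and acting each frame is the same.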

Here are a few examples of AI NPCs in video games:

  1. GLaDOS (Portal series): GLaDOS is an AI NPC that serves as the main antagonist in the Portal series. She is a sarcastic and malevolent AI that guides the player through a series of puzzles but also tries to kill the player throughout the game.
  2. Ellie (The Last of Us): Ellie is an AI-controlled companion in The Last of Us. She is a member of a group of survivors and serves as an ally to the player; she has her own motivations and behaviours and can act on her own during gameplay.
  3. Navi (The Legend of Zelda: Ocarina of Time): Navi is an AI NPC that appears in The Legend of Zelda: Ocarina of Time. She serves as a guide to the player, providing hints and directions throughout the game. Navi is able to fly around and interact with the player in a dynamic way, helping to create a more immersive gameplay experience.

It is difficult to predict exactly what the future will hold for AI NPCs, but their sophistication and realism will likely continue to grow. As artificial intelligence and machine learning techniques advance, AI NPCs will be able to behave in more complex and dynamic ways—allowing for additional layers of immersion during gameplay. Maybe, at some point in the future, AI NPCs will become truly advanced, as depicted in the great movie Free Guy.

Moreover, AI NPCs could grow to have applications outside of the video game environment. For example, they might be used in educational or training simulations; as virtual assistants or customer service representatives; and even for health care diagnoses (in conjunction with human doctors). As you can see, the future of AI NPCs is likely to be exciting and unpredictable.

Speaking of AI NPCs, Inworld.ai is a great platform that allows users to create and customise AI characters and integrate them into games, virtual worlds, or other immersive experiences. The platform is designed to be easy to use, with the goal of letting users build an AI character in minutes. Once a character has been created, its dialogue, personality, and behaviour can be customised to fit the specific needs of the user or the context in which it will be used.

This platform represents a pretty strong start for the emergence of new tools that allow users to create AI NPCs and make use of synthetic humans for different purposes, no matter what industry they belong to.

Training and simulation applications powered by AI NPCs

Interactive simulation models have long been used for training in engineering and computer science, in fields such as aviation, driving, and robotics. In such simulations, humans play an important role because they influence the simulated environment with their own actions.

The use of synthetic humans for training and simulation can offer a more immersive and realistic experience than traditional methods, such as computer simulations or role-playing. As part of such systems, humans interact continuously with the machine in order to train a model, which is then monitored and updated after it is deployed.

Human involvement makes it possible to train and test the AI model more efficiently and with higher accuracy, and it is also indispensable at deployment: combining human and machine intelligence achieves the best results over the long term. AI models deployed “in the wild” will likely encounter situations they are not prepared to handle due to under-representation or misrepresentation in their training data. In such cases, humans need to step in to verify the model’s predictions and provide feedback, either overriding the AI-generated prediction or helping to fine-tune the model in the future.

Optimising models and algorithms through human intervention and contribution leads to better and more accurate AI. This human-in-the-loop approach can be applied at various stages of the AI lifecycle:

  1. Training and testing: While the model is being trained, validated, and tested, humans can be involved in the process in order to accelerate learning. By demonstrating how tasks should be completed, humans can then provide feedback on the model’s performance. This can be achieved either by correcting the model’s outputs or by evaluating them, resulting in a reward function that can be used for reinforcement learning. Traditional supervised learning algorithms are slower and less sample-efficient than learning from human demonstrations and evaluations.
  2. Deployment: When the training data is minimal, imbalanced, or incomplete, human-in-the-loop workflows become increasingly important because we do not know whether the model is prepared to handle all possible edge cases. Moreover, even if the model is usually highly accurate, human monitoring and double-checking might be necessary when model errors are very costly: for instance, in the moderation of user-generated content, where false negatives could result in irreparable damage. In both cases, a labelling interface can be used to route outputs below a given confidence threshold to humans for verification, either in real time or in batches for future retraining.
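The confidence-based routing described above can be sketched in a few lines. This is an illustrative example only: the threshold, the record format, and the function name are arbitrary assumptions, not a particular labelling tool's API.

```python
def route_predictions(predictions, threshold=0.85):
    """Split model outputs into auto-accepted ones and ones routed to a
    human reviewer, based on the model's own confidence score."""
    auto_accepted, needs_review = [], []
    for item in predictions:
        if item["confidence"] >= threshold:
            auto_accepted.append(item)
        else:
            needs_review.append(item)  # queued for a human-in-the-loop check
    return auto_accepted, needs_review

preds = [
    {"id": 1, "label": "safe", "confidence": 0.97},
    {"id": 2, "label": "spam", "confidence": 0.61},
    {"id": 3, "label": "safe", "confidence": 0.88},
]
auto, review = route_predictions(preds)
print([p["id"] for p in auto], [p["id"] for p in review])  # -> [1, 3] [2]
```

Verified human labels from the review queue can then be folded back into the training set for the next retraining cycle.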

Synthetic Human Training and Simulation Examples

Some people consider humanoid robots to be a type of synthetic human, as they are designed to mimic the appearance and behaviour of humans. However, it is important to note that there are key differences between the two, above all physical embodiment, that make them quite distinct from one another.

While humanoid robots are physical, tangible objects, synthetic humans exist only in cyberspace. This means that the former can physically interact with their surroundings while the latter cannot. On the other hand, because they exist in a virtual world, synthetic humans can be accessed and interacted with from anywhere. Ideally, each humanoid robot has a digital twin in cyberspace to monitor, evaluate, and optimise it in real time.

That being said, the following examples relate to humanoid robots specifically, but they illustrate the potential of synthetic humans in tasks we perform daily in the physical world.

Advanced Disaster Medical Response Simulator (ADMRS)

One example of a synthetic human used for training is the Advanced Disaster Medical Response Simulator (ADMRS), developed by the National Institute of Standards and Technology (NIST). The ADMRS is a humanoid robot that is used to train first responders and healthcare workers in disaster response and emergency medical procedures. The robot is equipped with sensors and actuators that allow it to mimic the movements and responses of a real human, allowing trainees to practice procedures such as CPR and wound care in a simulated environment.

Haptix-1

Another example of a synthetic human used for training is the Haptix-1 robot, developed by the Georgia Institute of Technology. The Haptix-1 is a humanoid robot that is used to train surgeons in laparoscopic surgery, which involves making small incisions in the abdomen and using specialised instruments to perform procedures. The robot is equipped with haptic feedback, allowing surgeons to feel the resistance and movement of the robot’s instruments as they perform the procedures. In addition to training, synthetic humans or humanoid robots can also be used for simulation tasks, such as testing and evaluating the performance of products or systems.

NAO

NAO is a humanoid robot developed by SoftBank Robotics that has a range of capabilities and applications. One potential use for NAO as a training or simulation tool is in the education and research sectors.

In education, NAO can be used as an educational tool to teach students about robotics and programming. It can be programmed to perform tasks and respond to commands, allowing students to practice and learn about programming concepts in a hands-on way. NAO’s ability to recognise and follow faces and speak in multiple languages makes it a particularly useful tool for teaching language and communication skills.
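In practice, NAO is programmed through SoftBank's NAOqi SDK, which requires access to the physical robot. As a hardware-free sketch of the command-and-respond pattern students practise, here is a hypothetical simulator; it is not the real NAOqi API, and every name in it is invented for illustration.

```python
class SimulatedNAO:
    """A hypothetical, hardware-free stand-in for a NAO-style robot,
    mapping spoken commands to behaviours (not the real NAOqi SDK)."""

    def __init__(self):
        self.behaviours = {}

    def on(self, command: str, action):
        """Register a behaviour to run when a voice command is recognised."""
        self.behaviours[command.lower()] = action

    def hear(self, command: str) -> str:
        """Simulate speech recognition by dispatching on the command text."""
        action = self.behaviours.get(command.lower())
        return action() if action else "I don't know that command yet."

robot = SimulatedNAO()
robot.on("wave", lambda: "waving right arm")
robot.on("bonjour", lambda: "Bonjour!")  # multilingual responses
print(robot.hear("Wave"))  # -> waving right arm
```

The same register-and-dispatch structure is what students build against the real robot, with the lambdas replaced by calls into the motion and text-to-speech modules.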

In research, NAO can be used to study human-robot interaction and the potential applications of robots in different fields. Its versatility and interactivity make it a useful platform for researchers to explore and test new ideas and technologies. For example, researchers could use NAO to study how people respond to and interact with robots in different situations or to test the performance of new algorithms or sensors.

In addition to training and simulation, NAO could also be used for testing and evaluating the performance of products or systems. Its ability to move, sense, and interact with its environment makes it a useful tool for evaluating the usability and functionality of products or systems. For example, researchers could use NAO to test the safety and performance of wearable technologies or assistive devices or to evaluate the performance of systems in simulated environments.

Overall, NAO is a versatile and interactive humanoid robot that has the potential to be used as a training, simulation, and testing tool in a variety of fields. Its capabilities and applications make it a valuable tool for researchers and developers to explore and advance the potential of robotics and artificial intelligence.

Synthetic humans and humanoid robots

To conclude, synthetic humans and humanoid robots can be useful tools for training and simulation tasks, allowing individuals to practice and test procedures and technologies in a simulated environment without risking harm to real humans. However, it is important to note that these technologies are still in the early stages of development and have limitations compared to real humans.

When human annotators collect and/or label training datasets image by image, they develop a deep understanding of what the data looks like and why a model might exhibit specific errors in real-life situations.

Many people believe AI will eliminate a variety of human jobs, but it will also create many new ones. Trained humans will be needed to supervise and monitor AI models and to ensure AI is safe, reliable, and bias-free. Working with dedicated human-in-the-loop teams is therefore likely to be the future of trustworthy AI, and we will not be surprised if it becomes a real job title that companies start hiring for!

In addition, the development and improvement of synthetic humans, or digital avatars, could be accelerated by humanoid robots. For example, researchers may use them as a platform for testing algorithms used to create such entities—and then refining these programs based on the results they’ve observed in virtual reality simulations.

Humanoid robots could be used to create synthetic humans that are more lifelike than existing computer-generated characters. For instance, a digital avatar could act as the projected mind of an android and allow it to interact with people in ways that would feel more tangible, immersive, or real.

Digital influencers: Lil Miquela


Lil Miquela, also known by her full name Miquela Sousa, is a digital influencer and model who has gained significant attention in the media and online communities for her unique identity as a synthetic human. Created through the use of computer-generated imagery and machine learning algorithms, Lil Miquela has amassed a following of millions on social media platforms such as Instagram and TikTok, where she often shares fashion, music, and personal content with her followers.

Despite being entirely artificial, Lil Miquela’s presence online has sparked important discussions and debates about the concept of synthetic humans and their potential impact on society. On one hand, some experts view Lil Miquela as a pioneering example of how technology can be used to create entirely new forms of identity and expression. On the other hand, others are concerned about the ethical implications of creating synthetic humans and the potential for them to blur the line between reality and fiction.

Regardless of one’s personal views on the matter, it is undeniable that Lil Miquela has pushed the boundaries of what is possible with technology and has opened up new possibilities for the future of synthetic humans. As artificial intelligence and machine learning continue to advance, we are likely to see many more synthetic humans like her in the years to come.

Synthesised Visuals and Videos

Visuals and videos that are synthesised, also known as computer-generated imagery (CGI), are created using computer software and algorithms. These images and videos can range from simple 2D graphics to complex 3D models and animations.

CGI is a technique that is often used in the creation of synthetic humans and other digital content. It can be used to create realistic and detailed 3D models of humans and other objects, which can then be rendered and animated on a computer. One way that CGI affects synthetic humans is by allowing them to be created with a high level of detail and realism: with CGI, it is possible to create synthetic humans that look and behave in a very lifelike way, in some cases indistinguishable from real humans. This has numerous applications, such as in virtual reality experiences or in the film and entertainment industries; CGI was used extensively in the recent Avatar: The Way of Water (2022).

Another way that CGI affects synthetic humans is by allowing them to be created and modified more easily and efficiently. With CGI, it is possible to quickly create and modify synthetic humans, making it easier to experiment with different designs and features. This can help to accelerate the development and evolution of synthetic humans.

CGI has made great strides in bringing synthetic humans to life, and with the increasing popularity of virtual reality technology, its role in this process is only likely to grow.

Virtual Assistants and Customer Service

Virtual assistants and chatbots have been a part of modern society for quite some time. However, synthetic humans will change this significantly by increasing their efficiency and precision, opening up various innovative opportunities for businesses.

One of the main benefits of using synthetic humans for virtual assistants is that they can provide assistance 24/7 without the need for breaks or time off. They can also handle multiple tasks simultaneously and can be programmed to provide specific information or assistance on demand. Synthetic humans can be used as virtual assistants for personal or business purposes, and they can be accessed through a variety of devices, including smartphones, tablets, and desktop computers.

In the customer service industry, synthetic humans can be used to interact with customers and provide assistance or information. They can be programmed to handle common customer inquiries and complaints, and they can provide personalised responses based on the customer’s specific needs or requests. Synthetic humans can also handle customer service inquiries through social media platforms, chatbots, and other online channels. Their use could create an entirely new business model: one based on trust and empathy, not just the ability to decipher keywords or sounds.
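A very simplified sketch of this kind of programmed, personalised response is keyword-based intent matching. Real systems use far more sophisticated language models; every intent name and reply string below is invented purely for illustration.

```python
# Hypothetical intent table: keyword -> canned reply (illustrative only).
INTENTS = {
    "refund": "I can help with that. Refunds usually take a few business days.",
    "hours": "We're available around the clock; synthetic assistants never sleep.",
    "shipping": "Your order ships soon, and tracking details follow by email.",
}

def respond(message: str, customer_name: str) -> str:
    """Match a customer message to a known intent and personalise the reply;
    fall back to escalation when no intent matches."""
    text = message.lower()
    for keyword, reply in INTENTS.items():
        if keyword in text:
            return f"Hi {customer_name}! {reply}"
    return f"Hi {customer_name}! Let me connect you to a human colleague."

print(respond("Where can I ask about a refund?", "Sara"))
```

The escalation fallback is the human-in-the-loop safety valve discussed earlier: anything the synthetic assistant cannot confidently handle is handed to a person.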

There are a number of real-life examples of synthetic humans being used as virtual assistants. Some examples include:

1. Replika

Replika is a personal chatbot that uses artificial intelligence and machine learning algorithms to simulate human conversation. Users can chat with Replika about their thoughts, feelings, and experiences, and the chatbot will respond in a way that is designed to be supportive and comforting. Replika is available as a mobile app and can be accessed 24/7.

2. Soul Machines

Soul Machines is a company that specialises in creating digital humans, or synthetic humans, that are designed to look and behave like real people. The company’s technology uses artificial intelligence and machine learning to create avatars that can interact with users in a natural and human-like way.

One of the key features of Soul Machines’ technology is the use of a virtual neural network, which is modelled after the human brain and is used to control the behaviour and facial expressions of the digital human. The neural network is trained using data from real human subjects, which helps the digital human to better mimic human behaviour and emotions.

In terms of applications, Soul Machines has primarily focused on using its technology in customer service and support, where the digital humans can interact with customers and provide information or assistance. The company has also explored using its technology in education and training, as well as in entertainment and media.

3. Synthesia.io

Synthesia is a company that specialises in creating digital humans, also known as synthetic humans or virtual humans. The company uses artificial intelligence and machine learning algorithms to create computer-generated avatars that are designed to resemble and behave like real people.

Synthesia’s digital humans can be used for a variety of purposes, including customer service, virtual assistants, and entertainment. The company has worked with a number of clients, including major brands and organizations, to create personalised digital humans that can interact with customers and provide assistance or information.

In addition to creating digital humans, Synthesia also offers a range of related services built on artificial intelligence and machine learning. The company was founded in 2017 and is headquartered in London.

4. Care Angel

Artificial intelligence enables virtual nursing assistants that converse with patients, direct them to the most appropriate care facility, and more. A virtual nurse is available 24/7 to answer queries, examine symptoms, and provide instant guidance.

A number of AI-powered virtual nursing assistant applications help patients and care providers interact more frequently between office visits, preventing unnecessary hospitalisations. With the help of voice and artificial intelligence, Care Angel, billed as the world’s first virtual nurse assistant, can even perform wellness checks.

5. UneeQ

UneeQ’s digital humans are designed to provide personalised customer experiences at scale. They use artificial intelligence and natural language processing to mimic human conversation and behaviour, allowing them to engage with customers in a more human-like way.

To use UneeQ’s platform, businesses first select a digital human that meets their needs and customise it to fit their brand. They can then deploy the digital human on their website, social media, or other channels to interact with customers. The digital human is able to understand and respond to customer inquiries in real time, providing personalised responses and assistance.

One of the key benefits of UneeQ’s digital humans is their ability to handle a large volume of customer interactions without becoming overwhelmed or fatigued. This allows businesses to provide a consistent and personalised customer experience, even when dealing with high levels of traffic.

As stated on their official website, UneeQ’s digital humans are designed to revolutionise customer experiences by providing scalable human connections that help businesses build trust and strengthen relationships with their customers.

These are just a few examples of how synthetic humans are being used as virtual assistants in the real world. As the technology continues to advance, we are likely to see more and more synthetic humans used for a variety of purposes.

In short, synthetic humans have the potential to revolutionise the way virtual assistants and customer service are delivered, providing a cost-effective and efficient alternative to human staff for these tasks. However, it is important to consider the ethical implications of using synthetic humans in these roles, as well as the potential impact on employment and the economy.

Medical and Healthcare Applications

There is potential for synthetic humans to be used in the healthcare industry in a variety of ways. Some examples of how they could be used include:

  1. Patient education: Digital avatars could be used to provide patient education and information about various medical conditions and treatments.
  2. Telemedicine: Digital avatars could be used as a more personal and human-like alternative to traditional telemedicine platforms, allowing patients to have virtual consultations with healthcare professionals.
  3. Mental health support: Digital avatars could be used to provide mental health support and counselling, allowing patients to have more convenient and accessible access to care.
  4. Clinical trials: Digital avatars could be used to simulate patient interactions and responses, which could be useful in the development and testing of new medical treatments.

Synthetic humans have the potential to revolutionise healthcare because they can help doctors, nurses, and others provide better care more quickly. However, many challenges still need to be addressed before synthetic humans can be widely used in healthcare settings. For example, there are concerns about the accuracy and reliability of the technology, as well as issues related to patient privacy and confidentiality.

Decision support in a clinical setting

Decision support systems are tools that are designed to help healthcare professionals make informed decisions by providing them with relevant information and recommendations. In a clinical setting, decision support systems can be used to help with tasks such as diagnosis, treatment planning, and medication management.

There is potential for synthetic humans, or digital avatars, to be used as a form of decision support in a clinical setting. For example, a digital avatar could be programmed to provide information and recommendations to healthcare professionals based on a patient’s medical history, test results, and other relevant data. The avatar could also be used to present different treatment options and the potential risks and benefits of each option.
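As a toy illustration of the rule-based checks such an avatar could surface to a clinician, consider the sketch below. The field names and thresholds are illustrative assumptions only, not clinical guidance.

```python
def flag_vitals(patient: dict) -> list:
    """Return human-readable alerts for out-of-range vital signs."""
    alerts = []
    if patient.get("systolic_bp", 0) >= 180:
        alerts.append("Hypertensive crisis: review immediately.")
    if patient.get("spo2", 100) < 92:
        alerts.append("Low oxygen saturation: consider supplemental oxygen.")
    if patient.get("temp_c", 37.0) >= 38.0:
        alerts.append("Fever detected: check for infection.")
    return alerts

# Example: elevated blood pressure and fever trigger two alerts.
print(flag_vitals({"systolic_bp": 185, "spo2": 95, "temp_c": 38.4}))
```

Real decision-support systems are far more sophisticated, drawing on full medical histories and validated clinical guidelines, but the pattern of turning patient data into actionable prompts for the clinician is the same.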

Surgeries using robotics

Robotic surgery is a type of minimally invasive surgery in which the surgeon operates on patients through computer-assisted instruments.

Robotic surgery has been shown to offer several advantages over traditional open surgery: increased precision in the placement of instruments and devices during procedures, greater flexibility for patients with limited mobility due to illness or injury (e.g., stroke), and reduced surgeon fatigue, since repetitive movements during long surgeries are minimised.

In the future, we could see digital avatars used in conjunction with robotics in the operating room. An avatar can provide guidance and assistance to the surgeon during a procedure or monitor the patient’s vital signs and alert the surgeon to any potential problems.

Ethical Considerations of Synthetic Humans

AI has developed rapidly in recent years, leading to rapid advances in image, audio, and video manipulation. Cloud computing, public AI algorithms, and abundant data have created a perfect storm for making deep fakes that can be distributed via social networks at scale.

Despite its potential positive uses in art, expression, accessibility, and business, the technology has primarily been weaponised for malicious purposes. The rapid rise of deepfakes threatens the well-being of individuals, businesses, society, and democracy, and may worsen the already waning trust in the media. This erosion of trust promotes a culture of factual relativism that could unravel the increasingly fragile fabric of democracy and civil society. Moreover, deepfakes enable even the least democratic and authoritarian leaders to thrive, since any inconvenient truth can be quickly dismissed as fake news.

Falsifying narratives using deepfakes is dangerous and can result in intentional and unintentional harm to individuals and society at large. Deepfakes are not merely faked; they are so realistic that they defy our most basic senses of sight and sound. Putting words into someone else's mouth, swapping faces, and creating synthetic images and digital puppets of public personas in order to systematise deceit is ethically questionable, and those responsible should be held accountable for the potential harm to individuals and institutions.

Non-state actors such as terrorist organizations and insurgent groups can use deepfakes to portray their adversaries as making inflammatory speeches or acting provocatively in order to incite anti-state sentiment. Deepfake videos showing Western soldiers dishonouring religious sites, for instance, could easily inflame anti-Western emotions and fuel further discord. States can use similar tactics to spread computational propaganda against a minority community or against another country, such as posting a fake video showing a police officer shouting anti-religious slogans or a political activist advocating violence. All of this can be achieved with fewer resources than ever, faster internet speeds, and even microtargeting to galvanise support.

Fakes that are created for the purpose of intimidating, humiliating, or blackmailing individuals are unquestionably unethical, and their impact on the democratic process needs to be assessed.

Types of Deepfakes

There is potential for synthetic humans to be used to create even more convincing deepfakes, as they are able to mimic human behaviour and emotions in a more realistic and lifelike way. This could make it even harder to detect deepfakes and determine their authenticity, potentially exacerbating the problems associated with their use.

It is important to note that the use of synthetic humans to create deepfakes is not a given, and there are ways that the technology could be used to combat deepfakes as well. For example, synthetic humans could be used to create authentic-looking videos that could be used to debunk false or misleading information.

Deepfake pornography

One of the early uses of deepfakes was celebrity and revenge pornography. Deepfake pornography reflects and reinforces gender inequality: it targets and harms women almost exclusively, causing them emotional and reputational damage. The top four deepfake pornographic websites have attracted more than 134 million views, and about 96% of all deepfake videos online are pornographic.

In some instances, pornographic deepfakes can even result in losing a person’s job or health care coverage. They can threaten and intimidate the psychological well-being of an individual. It is disturbing and immoral to display fake pornography that is often non-consensual. Fortunately, several sites have preemptively banned this content, including Twitter and Reddit.

There is a greater degree of ethical uncertainty regarding consensual synthetic pornography. The idea that consensual deepfakes are equivalent to sexual fantasy may be argued as morally acceptable. Still, they could normalise the concept of artificial pornography, which may exacerbate concerns that pornography negatively impacts psychological and sexual development. There is also the possibility that realistic virtual avatars could lead to adverse outcomes. Acting negatively towards an avatar may be morally acceptable, but how will this affect our interactions with other people?

Synthetic resurrection

A “synthetic resurrection” is another issue of concern. In some US states, such as Massachusetts and New York, individuals have the right to control the use of their likeness for commercial purposes, and in certain states this right extends beyond death. In other countries, however, the situation may be more complex and varied.

Once an individual passes away, it is often unclear who owns a private citizen's or a public figure's face and voice. Could they be used for propaganda, publicity, or commercial gain? Deepfakes have become increasingly common in political circles as a way to misrepresent the reputations of political leaders posthumously for political and policy motives. And although some legal protections exist, a deceased person's heirs may use the voice and face of the deceased for their own commercial benefit.

Many companies offer synthetic voice as a form of bereavement therapy or a way for people to connect with their dead loved ones. Despite some arguing that it is similar to keeping pictures and videos of the deceased, the use of vocal and facial features to create synthetic digital versions is morally ambiguous. It could also be ethically problematic to create a deepfake audio or video of a loved one after they have passed away.

Deepfake voices

Despite the increasingly realistic sound of voice assistants like Alexa, Cortana, and Siri, people can still recognise them as artificial. Voice assistants can now mimic human and social speech elements, including pauses and verbal cues, thanks to advances in speech technology. Google's Duplex, for instance, is developing voice assistant features that make calls on a person's behalf in a voice indistinguishable from a human's.

Several ethical concerns arise once a human-sounding synthetic voice is possible. Because deepfake voice technology is designed to imitate human voices, it could undermine genuine social interaction. Biases in these tools' training datasets may also introduce racial and cultural prejudice.

Deepfakes can also be used maliciously by phone scammers and automated call centers to deceive people for monetary and commercial gain. Fake digital identities are used in espionage, fraud, and infiltration for a number of unethical reasons.

Training a deep-learning algorithm on natural face images makes it possible to generate synthetic faces. However, training a model on real faces without consent is unethical.

Fake news

It is now common in politics to stretch the truth, overstate a policy position, and present alternative facts to sway votes and donations. Such political opportunism is unethical but commonplace.

Fake news and synthetic media could greatly affect electoral outcomes if political parties use them. Those deceived suffer great harm, since deception prevents them from making informed decisions in their best interests. Falsifying information about an opposition candidate, or presenting a false alternative truth, misleads voters into serving the interests of the deceiver. This conduct is unethical, and there is little legal recourse. In the same way, a deepfake used to intimidate voters is unjust.

Deepfakes can also be used to misrepresent a candidate, tell a lie about them, falsely inflate their contributions, or damage their reputation. Deepfakes that aim to deceive, intimidate, misattribute, or inflict reputational harm are unethical, as is invoking the “liar's dividend” to dismiss genuine evidence as fake and thereby perpetuate disinformation.

The duty of morality

The creators and distributors of deepfakes are responsible for ensuring that synthetic media is used and implemented ethically. Companies like Microsoft, Google, and Amazon, which provide the tooling and cloud computing to create deepfakes quickly and at scale, carry a moral obligation here. Social media platforms like Facebook, Twitter, LinkedIn, and TikTok, which are capable of distributing deepfakes at scale, must likewise demonstrate ethical and social responsibility regarding their use, as must news media, journalists, legislators, and civil society organizations.

Social media platforms have an ethical duty to prevent harm, and users on these platforms share responsibility for the content they post and consume. However, structural and informational imbalances make it unrealistic to expect users to respond proactively to malicious deepfakes, so shifting the burden of responsibility for malicious synthetic media onto users is hard to defend ethically. The platforms remain responsible for identifying and preventing the spread of misleading and manipulated content.

Social networks and technology platforms often have policies in place for disinformation and malicious synthetic media, but these policies must adhere to ethical principles. A deepfake that causes significant harm, reputational or otherwise, also damages the platform's own credibility. Platforms need to act to prevent the spread of deepfakes on their networks by adding dissemination controls or differential promotion tactics such as limited sharing or downranking. A second effective tool is content labelling, which must be applied objectively and transparently, without regard to political bias or business model.

The platforms are also ethically responsible for creating and maintaining dissemination norms within their user communities. Defining community standards, community identity, and constraints on user submissions shapes what content producers publish. Behaviour consistent with norms and community guidelines can be reinforced by setting out examples of desirable behaviour and positive expectations for participants. Terms of use and platform policies are crucial in preventing harmful fabricated media from spreading.

A media literacy programme is one of the ethical obligations of institutions seeking to counter manipulated media. Platforms must equip users with critical media literacy skills so that they can build resilience and engage intelligently in consuming, processing, and sharing information. A practical understanding of media allows users to be more critical and engaged citizens while still recognising satire and parody.

The limits of digital fakery

Does synthetic data deserve all the attention?

There is no inherent connection between synthetic data and synthetic humans. Synthetic data can be used for training purposes, for instance, while synthetic humans can be used for a variety of other purposes, such as in video games, films, or virtual reality experiences.

Nonetheless, it is possible that synthetic data and synthetic humans could be used together in some way. For example, synthetic data could be used to train a machine-learning model that generates realistic images of synthetic humans. Alternatively, the generated image or behavior might serve as input for another algorithm—to predict how subjects would react in that situation.
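The "train on synthetic, apply to real" pattern described above can be sketched in a few lines. The example below generates synthetic 2-D samples for two hypothetical classes and fits a trivial nearest-centroid classifier on them; the centres, spreads, and class labels are all made-up illustration values.

```python
import random

random.seed(0)  # reproducible synthetic data

def synth(center, n=200, spread=0.5):
    """Generate n synthetic 2-D points around a class centre."""
    cx, cy = center
    return [(random.gauss(cx, spread), random.gauss(cy, spread))
            for _ in range(n)]

class_a = synth((0.0, 0.0))   # synthetic samples for class A
class_b = synth((3.0, 3.0))   # synthetic samples for class B

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

ca, cb = centroid(class_a), centroid(class_b)

def predict(p):
    """Classify a (possibly real) point by its nearest class centroid."""
    da = (p[0] - ca[0]) ** 2 + (p[1] - ca[1]) ** 2
    db = (p[0] - cb[0]) ** 2 + (p[1] - cb[1]) ** 2
    return "A" if da < db else "B"

print(predict((0.2, -0.1)))  # a "real" query point near class A
```

A real pipeline would swap the Gaussian sampler for a generative model (e.g. a GAN producing synthetic faces) and the centroid rule for a neural network, but the division of labour is identical: synthetic data trains the model, real data is what the model is then applied to.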

A few data-generation techniques have been shown to closely reproduce images or text from their training data, while others are vulnerable to attacks that make them regurgitate it entirely. On the privacy question, Aaron Roth, a professor of computer and information science at the University of Pennsylvania, notes that even if synthetic data does not directly correspond to real user data, it can still encode sensitive information about real people.

The Future of Synthetic Humans

Digital humans could even be traced back to 2015 when Piers Smith sketched a prototype onto a napkin. Four months later, the prototype was built.

Digital humans have been on a swift trajectory since then, even making Gartner's Emerging Technology Hype Cycle for 2022. As every superhero story illustrates, powerful things grow from humble beginnings.

Anyone interested in conversational AI technology must ask: what will 2023 hold for digital human innovation? According to Emergen Research, the digital human market will be worth $527.58 billion by 2030, growing at 46.4% annually in the years leading up to it. But 2030 is a long way away.

My predictions cover tech, business trends, and the broader world around us.

Synthetic humans and digital twins will allow us to be more human

As digital humans have grown in popularity and awareness over the years, we have seen talented people at leading brands demand more from them in their specific niches. As more brands get hands-on with the technology, it is logical that there will be more new use cases by the end of 2023 than ever before. It is fascinating to think about all these possibilities, because some of them we cannot even imagine yet.

The development of digital humans as AI companions is one area in which we’ve seen especially encouraging progress in recent years.

There have been conversations about a ‘loneliness epidemic’ sweeping several countries well before COVID-19. According to Harvard, nearly 43% of American adults are lonely.

These issues were brought into stark focus by worldwide lockdowns and social distancing measures, which prevented many from interacting with their families and colleagues. Many countries are experiencing an aging population, which will only exacerbate the mental, physical, and financial problems associated with loneliness.

Those are just a few of the countless use cases that can benefit from both the scale of digital interactions and the warmth of a more human interface. In the right hands, digital humans can help solve some of the world’s biggest problems.

Lonely people can benefit from them anytime, for as long as they need them, by providing engaging, empathetic, and open interactions. People with dementia, for example, are already seeing the benefits of AI companions.

With 2023 a potential turning point for innovative new applications of digital humans, I cannot wait to see what the next 12 months will bring!

In the future, brands will harness the power of personality

When people see your brand as nothing more than a bland, faceless corporation, it is difficult to build an emotional connection with them.

A great deal of effort goes into using personality, charisma, and charm at the top of the marketing funnel, to great success; television advertising traditionally does this exceptionally well. But while brands may have a lot of personality and distinctiveness early in the customer journey, most of it fades away further down the funnel. You cannot expect Colonel Sanders or Jake from State Farm to be there every step of the way to hand you fried chicken or answer questions about your policy over the phone. Although they could be, they have not been yet.

If brands recreate themselves and their ambassadors as digital humans, people can interact with them at multiple points along the marketing journey.

Here’s an example of the impact this has.

In April 2021, we witnessed the launch of Digital Einstein, made possible by a partnership with the Hebrew University of Jerusalem and Greenlight Rights, with the aim of educating people about Einstein – and who better to tell them about his life and accomplishments than the man himself?

It is obvious that Einstein had a unique personality. Every brand has one, from mascots to ambassadors to founders to captivating CEOs. A full-funnel marketing superpower is integrating brand values with that personality, making it interactive, and creating a fun experience – and we are excited to see brands benefit from this in 2023.

An increased level of personalization will lead to a deeper level of engagement

It is no longer enough to establish a strong brand personality when it comes to engaging customers. Organizations must also understand their customers’ personalities, including hobbies, interests, and lifestyle choices.

People today expect personalization even before they have become loyal customers. Those first interactions between a brand and a potential customer are often critical moments that shape a person's perception of the brand, for good or ill.

Expectations of personalization only grow with every subsequent interaction.

There are risks involved, but are there rewards as well?

According to ThinkPatented, 99% of marketers think personalization improves customer relationships, and according to Forbes, 97% claim it boosts business results.

Brands can provide better personalization by utilizing digital humans in the following ways:

  1. Using customers' names (or other identifiers)
  2. Embracing your brand values and those of your customers
  3. Integrating multiple touchpoints seamlessly
  4. Speaking your customers' language
  5. Predicting future purchases
  6. Being available whenever needed

With these ingredients, I expect more brands to integrate personalization into their marketing for 2023.

How Can Brands Use Synthetic Humans for Personalised Experiences?

There are a number of ways that brands could use digital humans to provide better personalization to their customers. Here are a few examples:

Virtual customer service

As mentioned before, brands could use digital humans to provide personalised customer service through chatbots or virtual assistants. This could allow brands to offer 24/7 support to their customers while also providing a more personalised and human-like experience.

Personalised product recommendations

Brands could use digital humans to provide personalised product recommendations to customers based on their past purchase history and other factors. For example, a digital human might be able to recommend products to a customer based on their style preferences or specific needs.
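Behind such a recommendation sits something like the toy content-based recommender below: rank catalogue items by overlap with the customer's stated style preferences. The product names and tags are invented purely for illustration.

```python
# Hypothetical catalogue mapping products to style tags.
CATALOG = {
    "linen blazer":  {"casual", "summer", "neutral"},
    "wool overcoat": {"formal", "winter", "neutral"},
    "graphic tee":   {"casual", "summer", "bold"},
    "silk tie":      {"formal", "classic"},
}

def recommend(preferences: set, k: int = 2) -> list:
    """Return the k products whose tags best overlap the preferences."""
    scored = sorted(
        CATALOG,
        key=lambda item: len(CATALOG[item] & preferences),
        reverse=True,  # stable sort keeps catalogue order on ties
    )
    return scored[:k]

print(recommend({"casual", "summer"}))
```

A digital human would layer conversation on top of this: it elicits the preferences through dialogue, calls a (much richer) recommender, and presents the results in natural language.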

Targeted marketing

Brands could use digital humans to deliver targeted marketing messages to specific segments of their customer base. For example, a digital human might be able to deliver personalised messages to young, fashion-conscious customers, while a different digital human could be used to target an older, more conservative demographic.

Virtual try-on

Brands could use digital humans to allow customers to virtually “try on” clothing or other products before they make a purchase. This could be especially useful for customers who are shopping online and want to see how a product might look on them before they commit to a purchase.

Virtual events

Brands could use digital humans to host virtual events, such as product launches or Q&A sessions. This could allow brands to reach a wider audience and provide a more personalised experience for attendees.

As AI technology converges, the customer experience will improve

Since 2018, the number of AI-generated deepfake videos has doubled every six months, according to a 2021 report from Cybernews.

However, even though digital humans do not fit squarely into that category, they work well alongside other kinds of synthetic media.

In the past few years, voice technologies have become increasingly sophisticated (from Microsoft, Google, Amazon, Veritone, and a host of others). AI convergence is increasing the real-time capabilities of digital humans as language models like GPT-3, real-time translation APIs, and computer vision and analytics platforms become more powerful. The most recent example of this convergence is OpenAI's ChatGPT, the most sophisticated chatbot to date.

When technologies are integrated well, the customer wins with a more seamless, more powerful, and better-orchestrated experience.

Kiosks will be disrupted (in the best way possible)

Digital humans made their homes in interactive kiosks. Just ask your UneeQ customer success representative about the time we visualised a seven-foot digital human kiosk to go into Auckland International Airport.

In retail environments today, the convergence of kiosk screens and intelligent devices makes kiosk interactions much more dynamic than they used to be.

We predict that the kiosk will not just serve as an early home for digital humans but will remain a valuable medium for years to come. With 5G networks and the sophistication of modern kiosks, digital humans can connect the kiosk and mobile experience enjoyably and conversationally in 2023.

Digital humans will populate the metaverse

The word ‘metaverse’ was not well-known outside of science fiction circles until late 2021, when Facebook changed its name to Meta. Today, many people may still be unaware of what it is, but that is likely to change in 2023.

In a metaverse, people can meet, interact and do everyday things without the usual physical constraints of space, providing immersive social experiences.

Shopping and other commercial activities also fall under the category of ‘everyday things,’ which is why countless brands and companies are already excited about the opportunities. All these things could be made available virtually in the metaverse – stores, events, conferences, showrooms, customer service channels, etc.

In the metaverse, we’ll need real people, but also autonomous AIs, much like you’d see in video games. As in video games, these AIs will need personality, function, and the ability to interact autonomously with other people in the metaverse. Digital humans are the best existing technology for brands to embody themselves and serve customers in the metaverse.

There will be digital humans that can converse openly and perform tasks for metaverse users. Could a Domino’s digital human deliver pizza to your door? It is entirely possible – and we’ll be asking (and answering) more questions like these in the coming years.

While the metaverse is still in its infancy and will not mature until at least 2025, brands are starting to plan for it now because it will bring such a significant change. The first baby steps for many brands are expected in the coming months.

Conclusion

Far from the realm of science fiction, we are already observing the implementation of synthetic humans and digital twins into our lives. Whether it is in marketing, education, or medicine, these cutting-edge technologies are already having a massive impact on our lives.

These advancements raise interesting questions about how we view the concept of humanity and, ultimately, the definition of life itself. In addition, this technology may find its way into day-to-day living in ways we never thought possible. There may come a point in our future where synthetic humans surround us, or rather systems capable of mimicking life in remarkable ways — systems that might hold all the same abilities and characteristics that we once thought only humans could achieve. Very possibly, this will be the new reality of human interaction.

While creating synthetic humans and digital twins may seem like a far-off idea today, as the technology advances, it will evolve into something more commonplace. It may be useful to prepare for this now so that we can use artificial life forms to enhance and augment budgets or to increase productivity. At the end of the day, if you can use AI to help you get your job done better, why not?

