Happiness Interfaces
There are numerous physically challenged people around the world, and I have seen cases in which a physically challenged person, or his or her family members and friends, could not be happy because of the barriers to movement caused by physical disabilities.
The research I would like to pursue is an interface platform for communication between human beings and VHs that can increase the happiness of physically challenged people and their family members by satisfying their desire for movement, using an ‘Advanced Augmented Reality (AAR) Glass’ and a ‘Virtual Human (VH).’ To create a prototype, several ‘settings’ are necessary; some can be realized with current technologies, while others require future technologies still to be investigated. More considerations and conditions will surely be required in the actual research, but for an understanding of my research plan, I list the minimum below.
1. The AAR Glass draws on PhysioHMD but is used for a different purpose, with additional technology applied. Basically, the AAR Glass can use the AR functions of the latest AR glasses already developed. Better AR functions will benefit the research, and the following additional devices and functions are required.
- Eighteen high-performance cameras will be installed to collect 360-degree visual information of the wearer's surroundings: front, rear, left, right, above, and below.
- The AAR Glass will be equipped with an audio I/O device that can collect and transmit all auditory information the wearer can hear, as well as the wearer's own voice.
- A ‘film lens camera’ will be attached to the inner surface of the AAR Glass in order to analyze the wearer’s facial expressions.
- The ‘film lens camera’ is a representative future technology that must be newly devised for the prototype. It is a new concept of ultra-thin camera that can be attached transparently to the lens surface. It captures the wearer’s iris information and the muscle movements around the eyes; an AI analysis system then analyzes this information and reconstructs facial expressions, such as happy or pleased faces, identical to the wearer’s actual expressions.
- An environment sensor that can collect weather information, including temperature, humidity, wind, and rain, is installed on the AAR Glass.
- An olfactory sensor that can collect olfactory information around the wearer is installed on the AAR Glass.
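To make the hardware list above concrete, here is a minimal sketch of how the AAR Glass's sensor streams might be bundled into a single per-frame record. Every class, field, and method name here is a hypothetical assumption for illustration, not part of any existing API.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical per-frame record bundling the AAR Glass sensor streams
# described above. All field names are illustrative assumptions.
@dataclass
class AARSensorFrame:
    timestamp_ms: int
    camera_images: List[bytes] = field(default_factory=list)  # up to 18 views
    audio_chunk: bytes = b""          # microphone capture of wearer and surroundings
    iris_and_muscle: bytes = b""      # 'film lens camera' data for expression analysis
    temperature_c: float = 0.0        # environment sensor readings
    humidity_pct: float = 0.0
    odor_signature: List[float] = field(default_factory=list)  # olfactory sensor

    def is_complete_360(self) -> bool:
        """A frame covers the full field of view when all 18 cameras report."""
        return len(self.camera_images) == 18

frame = AARSensorFrame(timestamp_ms=0, camera_images=[b""] * 18)
print(frame.is_complete_360())  # True when all 18 views are present
```

Grouping the streams into one timestamped record would let the downstream AI analysis treat each moment of the wearer's experience as a single unit.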
2. To complete the interface platform for communication between human beings and VHs, an AI analysis system is necessary; it must provide the following key functions.
- The AAR Glass worn by a family member (Main Character 1) collects visual, auditory, olfactory, weather, facial-expression, and iris information. AI then analyzes what the family member saw, heard, said, and felt, and creates the emotional expressions, face, and movement information of ‘VH1 (Sub-character 1),’ forming its persona. It is important that VH1's persona embodies exactly what the family member saw, heard, said, and felt, matching the actual human being.
- Although the physically challenged person (Main Character 2) wears the AAR Glass in a different place from his family member, his AAR Glass vividly presents the visual, auditory, olfactory, and weather information from the family member's (Main Character 1's) AAR Glass, as if he were with his family in the same place. In the same scene, it also shows the persona of VH1 (Sub-character 1), with exactly the same facial expressions, voice tone, emotional expressions, and gestures as the actual family member, as analyzed and provided by AI. In this phase, when presenting the AAR Glass's real-world information and VH1's persona in the same scene, detailed VFX and 3D vision work is required to amplify the sense of reality.
- The ‘film lens camera’ captures the facial expressions the physically challenged person (Main Character 2) makes when seeing the persona of the family member's VH1 on the screen of his AAR Glass and when hearing and sensing the real-world information; AI then analyzes this information and creates the persona of ‘VH2 (Sub-character 2).’ VH2's persona must match the actual physically challenged person so closely that no one can distinguish between the two. Of course, VH2 is the persona of a virtual human with no physical disabilities.
- As such, when VH2's persona, created by the AI analysis system, appears with exactly the appearance the physically challenged person had before becoming disabled, it is important to make the family member (Main Character 1) feel as if he were in the same place as the physically challenged person.
- Therefore, the family member and the physically challenged person, while actually in different places, can feel happiness by experiencing being together in the same place through their respective AAR Glasses.
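The two-way flow described in the bullets above can be sketched as a toy pipeline. The analysis and compositing functions below are mere placeholders standing in for the AI analysis system and the VFX/3D work; all function and key names are hypothetical.

```python
# Illustrative sketch of the two-way persona pipeline described above.
# The analysis and rendering steps are placeholders; real versions would
# call AI models and 3D compositing. All names are hypothetical.

def analyze_persona(sensor_data: dict) -> dict:
    """Stand-in for the AI analysis system: derive a persona from sensor data."""
    return {
        "expression": sensor_data.get("expression", "neutral"),
        "voice_tone": sensor_data.get("voice_tone", "calm"),
        "gesture": sensor_data.get("gesture", "still"),
    }

def render_scene(world_info: dict, persona: dict) -> dict:
    """Stand-in for VFX/3D compositing: overlay a persona on real-world info."""
    return {"world": world_info, "overlay": persona}

# Main Character 1 (family member): the glasses capture the world and the wearer.
mc1_sensors = {"expression": "smiling", "voice_tone": "warm", "gesture": "waving"}
mc1_world = {"scene": "park", "weather": "sunny"}
vh1 = analyze_persona(mc1_sensors)           # VH1 mirrors the family member

# Main Character 2 (physically challenged person) sees MC1's world plus VH1.
mc2_view = render_scene(mc1_world, vh1)

# MC2's reactions are analyzed in turn to form VH2, shown back to MC1.
mc2_sensors = {"expression": "laughing", "voice_tone": "bright", "gesture": "nodding"}
vh2 = analyze_persona(mc2_sensors)
mc1_view = render_scene(mc1_world, vh2)
print(mc2_view["overlay"]["expression"])  # smiling
```

The point of the sketch is the symmetry: each wearer's sensor data feeds the other wearer's view, so both experience being "together" in one scene.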
3. For the personas of VH1 and VH2, it is crucial to design them identically to the actual human beings (the main characters), using 3D vision and deepfake technologies, 3D graphics, interaction design, and so on.
4. Besides these, there are many factors to consider in detail while creating a persona.
I expect numerous results from this research and predict that it will eventually enable physically challenged people to do everything outside their homes that their physical disabilities once made impossible.
Using the results of this research, the persona of a physically challenged person can go anywhere with his family while he stays at home. A VH's persona can form new gatherings with other people, go to see a movie, and take trips with friends. They can experience far more than they can at present and accordingly feel greater happiness.
The research on an interface platform for communication between human beings and VHs will also be a new starting point for further research. Diverse expanded studies using design fiction are possible, such as ‘a study on interfaces enabling communication between human beings and VHs using simple tools,’ and ‘a study on interfaces enabling communication between VHs without human intervention.’ If communication between VHs without human intervention, and communication between VHs using simple tools, become possible in the virtual world, it will bring innovative change to a future society with no borderline between the real world and the virtual world.
For the study described above, my roles are as follows:
1. VH persona design: AI analyzes the personality, tastes, voice tone, gestures, habits, typical behaviors, and manners of the VH’s main character (shadow actor); based on this, I plan the VH’s persona and create the relevant storytelling.
2. Face design of the VH’s persona: Using AI vision and deepfake technologies, I need to create 3D images that cannot be distinguished from the main character’s actual appearance.
3. Body design of the VH’s persona: Based on data of the main character’s body captured from diverse angles, I need to create full 3D body data, indistinguishable from the actual main character, using AI deep learning, and create bodies for each story, place, and circumstance.
4. Background production for the videos: To realistically combine the real-world imagery sent from the main character with the full 3D VH, I need to perform correction processing using VFX (visual effects), CG special effects, and other techniques.
5. As project manager, I also need to be in charge of project schedule management, share the schedule with other members, and serve as a communication channel among group members.
Human-friendly VH with emotions and autonomous communication skills
We already experience the metaverse in life logging, augmented reality, virtual reality, and mirror worlds. Future generations will live in a metaverse that combines the real world and the virtual world, and the amount of time human beings spend working, meeting, experiencing, and enjoying in the virtual world will keep increasing. In this phase, for a VH (Virtual Human)'s persona to be like an actual human being, communication between VHs must be autonomous. I would therefore like to conduct research on a ‘human-friendly VH with emotions and autonomous communication skills.’ All around the world, many people suffer mentally from pervasive and irresistible solitude, and some even commit suicide. Some nations even treat solitude as a social epidemic. The biggest cause of solitude is the decline in human relationships and the reduced communication that follows. Hence, without special measures to treat this problem, more suicides will be committed due to solitude in future generations. For this reason, I regard this research as a study to make a ‘human-friendly VH with emotions and autonomous communication skills’ a remedy that helps people overcome solitude.
To create the prototype for this research, multidisciplinary collaboration is mandatory. Designs are also needed for the AI analysis system and the human-friendly VH, and more considerations are expected to be necessary as the actual study proceeds.
This research starts from two aspects. First, it must create an AI with humane emotions and autonomous communication skills. Second, it must create a VH with a human-friendly face, body, voice, and intelligence that can reduce solitude for a lonely person simply by being seen.
1. AI creation: AI deep learning and big data analysis to create the sensibility and emotions that human beings have.
- Multidisciplinary collaboration among psychology, medical science, technology, science, and design is required.
- My roles: participating in scenario writing for the development of the AI analysis system, and planning the interface scenarios between the AI and the VH.
2. VH creation: This is the main part I will be in charge of.
- VH persona face design: Using diverse fake data generated with AI deepfake and deep learning techniques, I need to create a human-friendly VH character in full 3D.
- VH persona making: creating a unique voice, body, intelligence, and other traits (personality, hobbies, and tastes, according to the research purpose) by analyzing diverse human-friendly voices using AI deep learning.
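As a rough illustration of the "emotions and autonomous communication" goal, the following toy sketch pairs a placeholder emotion detector with canned empathetic replies. A real system would use trained emotion-recognition and dialogue models; every table, keyword, and function here is a hypothetical stand-in.

```python
# Toy sketch of an "emotion-aware" reply step for the human-friendly VH.
# The keyword tables below are hypothetical placeholders, not a real model.

EMOTION_KEYWORDS = {
    "lonely": "sad",
    "alone": "sad",
    "happy": "joyful",
    "tired": "weary",
}

REPLIES = {
    "sad": "I'm here with you. Would you like to talk about it?",
    "joyful": "That's wonderful! Tell me more.",
    "weary": "You've worked hard today. Let's rest together for a while.",
    "neutral": "I'm listening. How was your day?",
}

def detect_emotion(utterance: str) -> str:
    """Placeholder emotion detector: match keywords, default to neutral."""
    for word, emotion in EMOTION_KEYWORDS.items():
        if word in utterance.lower():
            return emotion
    return "neutral"

def vh_reply(utterance: str) -> str:
    """Choose an empathetic reply based on the detected emotion."""
    return REPLIES[detect_emotion(utterance)]

print(vh_reply("I feel so lonely tonight"))  # sad branch
```

Even this crude loop shows the shape of the interaction: perceive the person's emotional state, then respond in a way that acknowledges it.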
Since the human-friendly VH ‘with emotions and autonomous communication skills’ born from this research can communicate with, comfort, and empathize with others, people can escape from solitude. This shows the possibility that the research can expand into diverse sectors related to human emotions, including medical science and education.
Visual art that can be “seen” by hearing, “heard” by sight, and “touched” by sight, using AI art
Visual art that can be “seen” by hearing (the ear) is a study of visual art that visually impaired people can see through their hearing. AI and deep learning introduce a new concept of ‘applying visual art to sound’ to create the research results. Moreover, the study aims to make the impressions non-disabled people see and feel almost coincide with the feelings visually impaired people hear and feel.
Visual art that can be “heard” by sight (the eye) is a study of visual art that hearing-impaired people can hear through their sight. AI and deep learning introduce a new concept of ‘applying sound to visual art’ to create the research results. This study aims to make the impressions non-disabled people hear and feel almost coincide with the feelings hearing-impaired people see and feel.
The visual art of “seeing” with the ear is a concept distinct from the art of hearing with the ear. In general, we say ‘hearing a sound with the ear,’ not ‘seeing a sound with the ear.’ The way people feel when they hear a sound is subjective, because it differs from person to person. On the other hand, the feelings people have when they see things with their eyes are objective, because what different people feel when seeing the same thing is similar. For example, people's impressions of seeing a bus are similar because they are actually looking at the same bus. However, when people hear the “vroom” of the bus in the same place, it is difficult to distinguish whether the sound comes from a bus, a car, a city bus, or an express bus, so the feeling of the sound “vroom” can differ for each person. Therefore, the conceptual expression ‘seeing a sound with the ear’ means that the feeling of hearing a specific sound with the ear can be objectified as if it were seen with the eyes. In other words, the concept of ‘seeing with the ear’ refers to a visualization process that converts the subjective feeling of sound into the objective feeling of vision.
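The idea of objectifying the feeling of a sound could be prototyped, in a very reduced form, by mapping simple audio features onto fixed visual parameters, so that the same sound always yields the same visual. The feature choices and mappings below are illustrative assumptions only; the actual research would rely on AI and deep learning rather than hand-written rules.

```python
import math

# Hypothetical sketch of "seeing a sound with the ear": map two simple audio
# features (loudness and a pitch proxy) to objective visual parameters
# (shape size, warm/cool color). All thresholds are illustrative assumptions.

def audio_features(samples):
    """Crude features: RMS loudness and zero-crossing rate (a pitch proxy)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    return rms, crossings / len(samples)

def visualize(rms, zcr):
    """Map loudness to shape size and the pitch proxy to a warm/cool color."""
    size = round(10 + 90 * min(rms, 1.0))        # louder -> larger shape
    color = "warm" if zcr > 0.05 else "cool"     # higher pitch -> warmer hue
    return {"size": size, "color": color}

# A quiet low hum and a loud high tone yield distinct, repeatable visuals.
hum = [0.1 * math.sin(2 * math.pi * 2 * t / 100) for t in range(100)]
tone = [0.9 * math.sin(2 * math.pi * 10 * t / 100) for t in range(100)]
print(visualize(*audio_features(hum)), visualize(*audio_features(tone)))
```

Because the mapping is deterministic, two listeners receive the same visual for the same sound, which is the essence of turning a subjective auditory feeling into an objective visual one.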
Visual art that can be “touched” by sight (the eye) is a study of visual art that physically challenged people can touch through their eyes. AI and deep learning introduce a new concept of ‘applying touch to visual art’ to create the research results. This study aims to make the impressions non-disabled people touch and feel almost coincide with the feelings physically challenged people see and feel.
It is expected that the new visual artworks created as a result of this study will give audiences innovative new impressions. Above all, visually impaired, hearing-impaired, and physically challenged people will be able to appreciate the same artworks in much the same way as non-disabled people. I hope this will contribute to reducing artistic inequality. In addition, this research can be extended to new research in various fields.