Blog Note. In the Bible’s book of the Revelation of Jesus Christ, there are numerous verses referring to the deception of men on earth through ‘false’ miracles. Elsewhere in the Bible, God warns man not to make graven images of anything in heaven or on earth. This blog author has written previous articles on the use of technology in the deception of men, and I address the issue in my upcoming book. The technologies in question include genetic manipulation, video and voice digitization, 3D manufacturing, CRISPR, artificial intelligence, biometric scanning, and computer green-screen imaging. Revelation 13:14 indicates, “And deceiveth them that dwell on the earth by the means of those miracles which he had power to do in the sight of the beast; saying to them that dwell on the earth, that they should make an image to the beast, which had the wound by a sword, and did live.” Many peoples living in remote places, developing countries, and other areas with limited education are not aware of the incredible advances taking place today in these technological areas. During the Tribulation period, millions of these people will be easily deceived by false miracles created with such technologies. Merging several of these technologies can readily produce “an image of the beast” that appears to speak and act as though it were alive. Any recent Hollywood blockbuster movie attests to the realistic imagery that can be created. End of note.
“Knowledge will be increased.” (Daniel 12:4)
Six Man-Made Technologies Used by Satan/Beast/False Prophet:
- Global Monitoring, Biometric Scanning.
- Genetic Manipulation, Sequencing, Splicing (CRISPR).
- Global Communications.
- Digital-Cashless Electronic Financial Transactions and Processing.
- Globalized, Interconnected, Networked Data Warehousing and Information Technology Systems.
- Artificial Intelligence (A.I.), Robotics, 3-D Manufacturing Technology.
Inside the bizarre human job of being a face for artificial intelligence:
The avatar for AI technology Amelia (right) is based on Lauren Hayes (left). (Courtesy of IPsoft)
(Excerpts) … Until she put ‘Amelia’ in a search window long after the project had wrapped, however, she hadn’t imagined a fully animated version of her likeness—or that it would be programmed to converse with humans. “It was really creepy,” she says. “I didn’t imagine it would be so realistic. I didn’t realize it would talk or have motion.” That was Amelia 1.0. Later versions of Amelia will be even more realistic. For Amelia 3.0, which hasn’t launched yet, IPsoft flew Hayes to Serbia, to a studio that specializes in making digital characters for movies and video games. This time, in addition to the “death star” 3D body scanning, the studio cataloged Hayes’ movements. She spent a day doing, she says, “basically anything anyone could ever ask Amelia to do.” When prompted, she pretended she had just seen Brad Pitt, for instance, and that she had just seen her best friend. Dots on her face helped cameras track her specific expressions, which will be used to help animate Amelia’s face in real time. Outside of the death star, a movement suit with motion-capture sensors mapped her mannerisms and actions.
What is the point of making Amelia’s avatar so realistic, or of creating a human persona for her at all? When you talk to somebody, there are all sorts of non-verbal cues, and the avatar itself helps with empathy. If end users feel they are being heard and understood, they are more likely to engage further and at more length, and that allows Amelia to grasp the intent of what the user is trying to say. The avatar is programmed to react to human conversations with appropriate expressions and actions (so Amelia doesn’t smile, for instance, when an insurance client explains that they’ve just been diagnosed with a terrible disease).
IPsoft isn’t the only company that goes to great lengths to make its automation technology seem more relatable, whether that involves coming up with a backstory, mimicking tone and emotion in speech, or making its avatars actually look human.
Reardon says that the humanness of Amelia is partly intended to help workers feel more comfortable interacting with her. “They won’t feel, ‘Do I need an instruction manual to work with an AI?’ We all instinctively know how to communicate with each other well.” Most of Amelia’s appearances do not come with the full animated avatar; only in special implementations, such as at a customer service kiosk, does Amelia’s full life-like avatar appear.
IPsoft is working with some clients to create customized versions of Amelia’s avatar. The avatar based on Hayes is IPsoft’s branded version of the technology, and it helps shape the way that companies think about and introduce it. That positioning is going to get more realistic. Edwin van Bommel, IPsoft’s chief cognitive officer, says that the company is careful to avoid the “uncanny valley,” the point at which Amelia is so realistic, yet still slightly off, that it’s creepy. “But it’s a moving target,” he says. Culture is getting accustomed to the idea of human-looking artificial intelligence.
In version 3 of Amelia, her face will move in more ways and be so detailed that you can see her pores. Like Hayes’ face and all human faces, it will be slightly asymmetrical. “If I filmed Lauren and Amelia at the same time, and had them walk across the screen, you wouldn’t be able to tell the difference between the two of them,” Reardon says. Since most phones and computers can’t render an image that detailed, especially in real time, that won’t be the version of Amelia users see at the 50 global companies where she’s deployed.
Most likely, that version will be used in demonstrations like the game show between Amelia and Hayes. On stage, Hayes easily answers quiz questions faster than Amelia, and with more natural, human language. So even when their images look exactly the same, it will still be possible to tell Hayes and Amelia apart, at least for now.
Marilyn Monroe to be brought back to life as a digital avatar:
The movie icon, who died from an apparent drug overdose in 1962, aged just 36, is being brought back to life thanks to computer wizardry. A new film of her life is in the pipeline, using a Marilyn lookalike and the cutting-edge technology featured in Hollywood blockbusters. Actress and model Suzie Kennedy was turned into a Marilyn avatar at the world-famous Pinewood Studios, home of Star Wars and James Bond. The 41-year-old actor spent hours having her face and body scanned to produce a ‘digital double’ which will play the part of the troubled star in the movie. To make the digital Marilyn, Suzie had more than 3,000 photos of her face and body taken, working with Amanda Darby, head of Pinewood 3D, and had to stand on a platform surrounded by 181 cameras snapping every inch of her. Markers were drawn on Kennedy’s body, and another 60 cameras were used to pick up her facial expressions. She then had a motion-capture session with Phil Stilgoe of Centroid, experts in the field, in which she moved about in a bodysuit with a helmet-mounted camera to map all her movements.
Fake Video Could Make You Question Everything You See:
“The idea that someone could put another person’s face on an individual’s body would be like a home run for anyone who wants to interfere in a political process. This is now going to be the new reality, surely by 2020, but potentially even as early as this year.” — Senator Mark Warner (D-VA). New technology with the ability to create hyper-realistic fake videos has the potential to wreak havoc on the political landscape, lawmakers and technology experts say. The tech allows people’s faces to be superimposed onto different bodies in other videos, and separate technology can alter facial expressions. Currently, fake video technology requires manipulation of existing video footage of a person and cannot create fake video from scratch with just a picture. The combination of these emerging technologies means it is highly likely we will soon see videos of public figures saying and doing things that never happened, all but indistinguishable from the real thing. The website deepfakes.club offers tutorials to anyone with an internet connection on how to create fake videos. Nor are government attempts to develop reliable ways of authenticating content likely to be effective. “We all will need some form of authenticating our identity through biometrics. This way people will know whether the voice or image is real or from an impersonator,” Congressman Ro Khanna (D-CA) told The Hill.
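The article mentions the idea of authenticating content but does not describe any particular scheme. As one illustrative sketch only (not a method proposed by the lawmakers quoted), a publisher and a verifier who share a secret key could tag a media file with a keyed hash, so that any later alteration of the footage invalidates the tag. The key and file names here are hypothetical stand-ins.

```python
import hmac
import hashlib

def sign_media(media_bytes: bytes, key: bytes) -> str:
    """Produce an HMAC-SHA256 tag over a media file's raw bytes."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag and compare it in constant time."""
    return hmac.compare_digest(sign_media(media_bytes, key), tag)

key = b"broadcaster-secret-key"      # hypothetical shared secret
original = b"...raw video bytes..."  # stand-in for real footage
tag = sign_media(original, key)

print(verify_media(original, key, tag))             # authentic copy passes
print(verify_media(original + b"x", key, tag))      # any alteration fails
```

A real system would need key distribution and tamper-resistant signing at the camera or studio, which is precisely the unsolved part the article is pointing at.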
Expert warns of “terrifying” potential of digitally altered video:
Alec Baldwin’s “SNL” act as Donald Trump was digitally altered with the President’s face to create a fake Trump debate performance. Alec Baldwin is to some a perfect stand-in for President Trump, but in a digitally altered video online, the president’s face has been digitally stamped onto Baldwin’s performance. It’s part of a wave of doctored audio and video now spreading online. “The idea that someone could put another person’s face on an individual’s body would be like a home run for anyone who wants to interfere in a political process,” said Virginia Senator Mark Warner. He believes manipulated video could be a game-changer in global politics. “This is now going to be the new reality, surely by 2020, but potentially even as early as this year,” he said. “Deepfakes” is the anonymous YouTuber who has made fake videos of President Trump, Hillary Clinton and Vladimir Putin, based on performances by the cast of “Saturday Night Live.” In a message to CBS News, he said he does it for “fun.” And though he sees the potential for fake news, he adds: “People will have to adapt as the tech is here to stay.” Hany Farid runs a lab at Dartmouth College aimed at exposing digital fakes. Correspondent Tony Dokoupil asked Farid, “Are we ready for this?”
“No. We are absolutely not ready for this. We are absolutely not ready for it,” Farid replied. “On so many different levels, we’re not ready for it.” For starters, Dokoupil asked Farid to make a fake video. “I want to replace your face with Nicolas Cage’s,” he said. Why Nicolas Cage? “Just because it’s awesome,” Farid laughed. “No other reason.” The result: “You can look at that all day long, and that, I tell you, is a pretty compelling fake,” Farid said. The method, recently published online by an anonymous developer, is one of several that Farid is tracking. One program he demonstrated can change facial expressions in real time, and an Adobe program can create new audio from written text. “Right out of the gate, that’s terrifying,” Farid said. “I mean, that is just terrifying. Now I can create the president of the United States saying just about anything.” Adobe calls this an “early-stage research project.” While the company acknowledges the potential for “objectionable use,” it believes “the positive impact of technology will always overshadow the negative.” All these methods have legitimate uses in digital video and design, but Farid worries they’ll be weaponized, too. “I think the nightmare situation is a fake video of a politician saying, ‘I have launched nuclear weapons against a country.’ The other country reacts within minutes, seconds, and we have a global nuclear war,” Farid said. His lab is developing tools to quickly identify fakes, but Farid suspects this is just the beginning of a longer struggle. “We have a ‘fake news’ phenomenon that is not going away,” he said. “And so add to that fake images, fake audio, fake video, and you have an explosion of what I would call an information war.” © 2018 CBS Interactive Inc. All Rights Reserved.