OpenAI has unveiled a new tool to make ultra-realistic artificial intelligence-generated videos from text inputs, sparking concerns about such AI systems being misused to manipulate voters ahead of elections.

The AI tool, named Sora, can be used to create videos of up to 60 seconds with "highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions," the ChatGPT company said in a blog post on Thursday.

OpenAI shared multiple sample videos that were made using the AI tool, which looked surreal. One example video shows two people, seemingly a couple, walking through a snowy Tokyo street with their backs to the "camera". The very lifelike video was generated by the AI tool from a detailed text prompt: "Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes."

Another video made using the tool and shared by OpenAI chief Sam Altman shows ultrarealistic woolly mammoths treading through a snowy landscape with snowcapped mountains in the distance.

Experts have already raised numerous concerns about the misuse of such AI technology, including the use of deepfake videos and chatbots to spread political misinformation ahead of elections. "My biggest concern is how this content could be used to trick, manipulate, phish, and confuse the general public," ethical hacker Rachel Tobac, a member of the technical advisory council of the US government's Cybersecurity and Infrastructure Security Agency (CISA), posted on X.

Even though OpenAI acknowledged the risks associated with widespread use of the tool, stating it was "taking several important safety steps ahead of making Sora available in OpenAI's products", Ms Tobac said she was "still concerned".

Citing examples of how the tool may be misused, she said adversaries could use it to build a video that appears to show a vaccine side effect that doesn't exist. In the context of elections, she said such a tool could be misused to show "unimaginably long lines in bad weather" to convince people it's not worth heading out to vote.

OpenAI said its teams were implementing rules to limit potentially harmful uses of Sora, such as depictions of extreme violence, celebrity likenesses, or hateful imagery. "We are working with red teamers – domain experts in areas like misinformation, hateful content, and bias – who are adversarially testing the model," the ChatGPT creator said.

But Ms Tobac fears adversaries may find ways to skirt the rules. "Take my example above: prompting this AI tool for 'a video of a very long line of people waiting in a torrential downpour outside a building' isn't in violation of these policies - the danger is in how it's used," she explained. "If that AI-generated video of an impossibly long line of people in torrential downpour is used by an adversary to post on social media on Election Day, now it could be used to convince certain folks to stay home and avoid the polls and line/weather."

She called on OpenAI to discuss how it could partner with social media channels to auto-recognize and label AI-generated videos shared on platforms, along with developing guidelines for labeling such content. OpenAI did not immediately respond to The Independent's request for comment.

"This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet. Crafting a new false narrative can now be done at dramatic scale, and much more frequently – it's like having AI agents contributing to disinformation," Gordon Crovitz, co-chief of NewsGuard, a misinformation tracking company, told The New York Times.

ATLANTA, Ga. (Atlanta News First) - A New York Times poll shows former President Donald Trump leading his Oval Office successor, Joe Biden, by seven points in Georgia.

The poll, conducted in conjunction with the Siena College Research Institute, also shows Trump leading Biden in Nevada by 11 points; Arizona by five points; Pennsylvania by four points; and Michigan by three points.

Across the six battleground states, 59% disapprove of Biden's job performance; 71% say he's too old to be president; and 62% say he lacks the mental sharpness to be an effective president. The only battleground state in which Trump is losing is Wisconsin, by three points, but only if another Democrat - not Biden - runs.

"The only good news for President Biden is that it was conducted a year before voters go to the polls," said Dr. Don Levy, director of the Siena College Research Institute.