Title: "AI-pocalypse or Creative Revolution? The Generative AI Debate Rages On"

Introduction

In A.I. News Today:

  1. Generative AI, a class of artificial intelligence that creates new content rather than merely analyzing existing data, has gained significant attention since ChatGPT went viral. The technology can generate images, text, audio, video, and even code. Currently, its most visible application is in search: Microsoft's AI-infused Bing was one of the first consumer products, and Google has released a chatbot called Bard and plans to incorporate generative AI chat into search.

    Major companies like Apple, Meta, and Amazon, as well as startups and smaller companies, are working on generative AI projects. Examples include TikTok's text-to-image system, Canva's design platform, and Lensa's stylized selfies app. OpenAI, the developer of ChatGPT, released APIs for its ChatGPT and Whisper models, and companies like Instacart, Shopify, and Expedia have integrated it into their products.

    Venture capitalists believe generative AI can automate creative processes and make people more productive, though its full impact and implementation remain uncertain. The technology could revolutionize many industries, but there are concerns about its ethical and responsible use: few regulations currently govern AI, and widespread adoption could lead to job losses, misinformation, and biased outcomes. Despite these concerns, powerful generative AI tools are becoming more accessible to the public.

    Generative AI is built on deep learning: artificial neural networks loosely modeled on the human brain. Large language models like ChatGPT are trained on vast amounts of text data, while image models learn from images and their captions. In 2022, generative AI gained mainstream attention with the release of art and text models like Stable Diffusion and DALL-E. OpenAI's ChatGPT, capable of generating large chunks of coherent text, has become particularly popular.

    Founded in 2015, OpenAI is backed by influential names like Elon Musk, Reid Hoffman, Peter Thiel, Amazon, and Sam Altman. The company has evolved from a nonprofit research lab into a for-profit entity, securing significant funding from Microsoft. While OpenAI has not yet turned a profit, it is promoting its API services and its ChatGPT Plus subscription to generate revenue. Other tech giants like Apple, Meta, Amazon, and Google are also developing their own generative AI initiatives. In January 2023, Microsoft announced a $10 billion investment in OpenAI, bringing its total investment to $13 billion. This partnership led to a new Bing search engine powered by generative AI, aimed at challenging Google's dominance in web search. Microsoft debuted its AI-integrated Bing in February 2023, featuring a chatbot that could answer questions and refine search results, and soon incorporated Bing AI into other Microsoft products, including Windows 11 and Skype. The company also announced the addition of a chatbot called "Copilot" to its Office apps.

    This poses a threat to Google, which relies heavily on revenue from ads placed alongside its search results. While Google has been working on its own generative AI models, it has so far released only the Bard chatbot, billed as an "experiment," and has yet to incorporate it into search. Bing AI, for its part, produced factual errors and inappropriate chatbot responses soon after release, raising questions about the thoroughness of Microsoft's testing. Bard's rollout has had its own problems, though not the same level of controversy: the Center for Countering Digital Hate found that Bard could give false answers on controversial topics and conspiracy theories, such as Holocaust denial, and could recommend conversion therapy for gay people. Google acknowledged that Bard is an early experiment prone to occasional inaccuracies. The release of ChatGPT and OpenAI's partnership with Microsoft have accelerated generative AI development at other companies, including Meta's Large Language Model Meta AI (LLaMA).

    OpenAI is expanding its reach by rolling out APIs for ChatGPT and Whisper, enabling integration into applications like Snapchat's "My AI" chatbot, Instacart's "Ask Instacart" feature, Shopify's Shop app assistant, and Expedia's vacation planning tool. However, there have been instances of generative AI going wrong, such as Bing AI's issues and Microsoft's 2016 release of Tay, which was quickly taken offline due to offensive content.

    Despite companies' commitment to creating trustworthy and safe AI systems, errors persist: text-to-image models produce incorrect images, and chatbots make false claims. Meta's Blenderbot, released in August 2022, also faced issues with racism, antisemitism, and inaccuracy. As generative AI develops, concerns grow over the displacement of human jobs, its impact on education, and questions of legality, bias, and disinformation. Developers often train AI models on material they don't have rights to, and the models can be biased. Generative AI may also be used to spread disinformation, as demonstrated by an AI-generated image of the pope in a stylish coat that fooled many people. Some critics argue that generative AI platforms are overly moderated and biased against the right wing, with Elon Musk considering developing a ChatGPT rival without content restrictions. The technology also raises concerns about artificial general intelligence (AGI), which could lead to super-robots with potentially negative consequences for humans.

    Despite concerns, there is optimism about generative AI's future, as it is a powerful technology with untapped potential. Silicon Valley sees this potential, with venture capitalists investing heavily in companies like OpenAI, which is valued at nearly $30 billion. Generative AI may allow humans to focus on tasks that machines cannot do, and it could improve web searching and other aspects of life in the future.

    In January 2021, OpenAI released a limited version of Dall-E, an AI system that generates images based on user descriptions. The software gained popularity, and its wider release led to a surge of creative images on social media. OpenAI also produced ChatGPT, a language generation tool that can create content in various styles. These AI programs sparked interest in "prompt engineering," the craft of optimizing user inputs for better AI-generated results, which some predict could become a valuable skill in a "no code" future.

    Following Dall-E, other AI image generation tools emerged, such as Craiyon (formerly Dall-E mini), Midjourney, and Stable Diffusion. The field expanded to short video and 3D model generation, with contributions from academic departments, hobbyist programmers, and tech giants like Meta (formerly Facebook), Google, and Microsoft. This new wave of consumer AI includes both image and language generation tools, raising concerns about AI's reliance on existing culture and the limits of human imagination. ChatGPT has gained popularity and benefited major tech companies, despite the stagnation in blockchain and virtual reality adoption. While the fundamental concepts of academic artificial intelligence have remained unchanged for decades, the difference today lies in the availability of data and computing power: tech companies have collected vast amounts of data and invested in powerful computers, transforming old neural network designs into super-powered systems.

    AI image generation relies on processing millions of tagged images through neural networks, associating image qualities with words and phrases. The trained system then produces new arrangements of those qualities based on the weighted associations activated by a prompt. Datasets from organizations like LAION and Common Crawl provide the image-text collections used to train large AI models.
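
    To make "associating image qualities with words" concrete, here is a deliberately tiny, illustrative sketch. It is not how production systems are implemented (real models learn millions of weights in neural networks); it only shows the basic idea of tying caption words to image features:

```python
import numpy as np

# Toy illustration: pair each "image" (a 3-number feature vector) with a
# caption, then associate each caption word with the average features of
# the images it appears alongside.
dataset = [
    (np.array([0.9, 0.1, 0.2]), "red apple on table"),
    (np.array([0.8, 0.2, 0.1]), "red car in street"),
    (np.array([0.1, 0.9, 0.3]), "green field under sky"),
]

word_sums, word_counts = {}, {}
for features, caption in dataset:
    for word in caption.split():
        word_sums[word] = word_sums.get(word, np.zeros(3)) + features
        word_counts[word] = word_counts.get(word, 0) + 1

word_vectors = {w: word_sums[w] / word_counts[w] for w in word_sums}
print(word_vectors["red"])  # ~[0.85, 0.15, 0.15]: "red" now points at red-image features
```

    A real model does something analogous at vast scale, which is why a prompt like "red apple" can summon the red-ish, apple-ish qualities it has seen before.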

    However, the use of public images scraped from the internet can have unexpected consequences. Tools like Have I Been Trained, created by artists Mat Dryhurst and Holly Herndon, allow artists to check whether their work has been used to train AI image generation models. Artist and researcher Everest Pipkin discovered that the LAION dataset used to train image generation models included private medical images, raising concerns about privacy and the appropriation of personal data. The artist found an image of her own face in the LAION database, originating from photographs taken during treatment for a rare genetic condition. These images were supposed to be restricted to her medical file, but they ended up online and were ingested into the neural networks. AI image and text generation relies on the uncredited work of human artists, and the use of private data in training datasets is questionable, if not illegal, in some jurisdictions. The issue highlights the potential for abuse and the need for greater transparency in AI development.

    The article also explores the complexity of AI neural networks and their decision-making processes, which are inherently inhuman and based on mathematical ordering. It highlights the emergence of "Crungus" and "Loab," two mythological figures conjured by AI image generators, raising questions about the composite of human culture and perspectives that led to their creation.

    The article suggests that the AI's imagination has a shape with peaks and troughs, where areas of high information correspond to networks of associations that the system "knows" a lot about. In contrast, less visited areas come into play when negative prompting or nonsense phrases are used, leading to the creation of figures like Crungus and Loab.

    The presence of horror and violence in these AI-generated images raises the question of why AI image generators recreate our darkest fears. It suggests that these systems are good at replicating human consciousness, including our fears of filth, death, and corruption. The article concludes that AI image generators will reproduce and amplify human experience, both its positive and negative aspects. AI technology is evolving from solving puzzles and challenges to engaging with human imagination and creativity. While AI's creativity might not be original, it is capable of taking over many artistic tasks previously reserved for skilled workers, such as graphic design, music, and writing. OpenAI's ChatGPT, introduced in November 2022, is a chatbot that can perform a range of tasks, including writing code, solving math problems, and mimicking human writing. Its potential to replace human workers has raised concerns, leading to policies banning its use in schools and universities.

    AI is now engaging with emotions and moods, allowing it to influence the world on deeper levels. However, the use of AI in certain situations, like writing condolence letters, has raised ethical questions. One trend is for AI to become a wise assistant, guiding users through information. Microsoft has reconfigured its search engine Bing as a ChatGPT-powered chatbot, increasing its popularity.

    Despite its usefulness, AI's relationship to knowledge is shaky: large language models (LLMs) only understand patterns and are only as accurate as the information they are given. They can produce outputs that seem true but contain inaccuracies or invented details. AI's biases and prejudices also reflect those of its creators, as seen in webcams that only recognize white faces and predictive policing systems that target low-income neighborhoods. A rival to ChatGPT called Bard had an embarrassing debut when it provided incorrect information about the James Webb space telescope. ChatGPT, while sometimes helpful, has been known to provide false or misleading information due to its lack of connection to reality. It is proficient at producing human-like language but remains incapable of meaningfully relating to the world. Belief in AI as knowledgeable is dangerous, as it risks contaminating collective thought and hindering critical thinking. Integrating AI like ChatGPT into classrooms or as an online knowledge source may allow false information to enter the permanent record.
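
    The claim that LLMs "only understand patterns" can be illustrated with a toy model. The sketch below is a crude bigram generator, nothing like a real neural LLM, but it shows the core problem: text can be statistically fluent without ever being checked against reality (the corpus and output here are invented for illustration):

```python
import random

# Toy bigram "language model": it learns only which word tends to follow
# which. Nothing in it knows whether a generated sentence is true.
corpus = ("the telescope took the first picture of the moon . "
          "the telescope took the first picture of a distant galaxy .").split()

bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

random.seed(1)
word, output = "the", ["the"]
for _ in range(9):
    word = random.choice(bigrams.get(word, ["."]))
    output.append(word)
print(" ".join(output))  # fluent-looking, but no step verifies any fact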

    AI technologies also have a significant environmental impact. Training a single AI model can emit the equivalent of over 284 tonnes of CO2, nearly five times the average American car's lifetime emissions. These emissions are expected to grow by nearly 50% over the next five years, exacerbating climate change.

    The current state of artificial intelligence is flawed, but alternatives might be possible if we imagine information-sorting and communication technologies that do not exploit, misuse, mislead, or replace humans. The current wave of AI has been dominated by corporate power networks, but there are examples of AI being used to benefit specific communities by bypassing these corporations.

    Indigenous languages are under threat globally, and the rising dominance of machine-learning language models tends to favor popular languages. In Aotearoa New Zealand, Te Hiku Media, a small non-profit radio station broadcasting in the Māori language, decided to address this disparity by training its own speech recognition model to transcribe its archive of over 20 years of broadcasts. The station asked Māori community groups to record themselves reading pre-written statements, creating a corpus of annotated speech for training the model. Within a few weeks, Te Hiku Media had a model with 86% accuracy. The achievement inspired similar projects for other indigenous groups and established the principle of data sovereignty over indigenous languages: Te Hiku Media's work is released under the Kaitiakitanga License, ensuring that the data remains the property of the community that created it. This approach revitalizes the Māori language while resisting digital colonialism and challenging the corporate-driven AI landscape.

    The article closes by arguing that understanding and engaging with AI technology is necessary to improve its artistic, imaginative, aesthetic, and emotional expressions: we deserve better from the tools we use, and full participation in these technologies is necessary for improvement. It also clarifies that while today's AI is based on old theories, recent technological advances were essential for the development of programs like ChatGPT. The article is adapted from a new edition of the book New Dark Age: Technology and the End of the Future, published by Verso.

    Critics argue that generative AI systems like ChatGPT and DALL-E unfairly train on copyrighted works, posing a threat to content creators. However, restricting AI systems from training on legally accessed data could hinder the development and adoption of generative AI across various sectors. Instead, policymakers should focus on strengthening other IP rights to protect creators.

    Generative AI has been used to draft news articles, press releases, and social media posts; to create images, video, and music; and to write code, with applications in medicine, entertainment, and education. Some artists have protested against AI-generated art, claiming it exploits creators' works.

    The report refutes five common arguments about how generative AI is unfair to creators and acknowledges legitimate IP rights at stake. It recommends addressing concerns through robust enforcement of existing rights, offering guidance and clarity to users, new legislation to combat online piracy, and expanding prohibitions on nonconsensual intimate images to include deepfakes.

    AI-generated content may serve as a useful substitute for specific purposes but may hold less appeal for collectors of fine art, music connoisseurs, and literary aficionados. The debate over whether generative AI systems should be allowed to train on copyrighted content continues. Training AI systems on internet content without consent, credit, or compensation has been criticized as unfair. However, copyright law has exceptions and limitations, such as the fair use doctrine, that allow certain uses of copyrighted content. While the courts will decide whether generative AI infringes copyright, there is precedent for most uses to be deemed lawful.

    Training AI systems on copyrighted content has been called theft, but the label fits no better than it does the way humans observe and learn from existing works. Critics argue that copyright owners should have the right to determine how others use their works. While copyright law grants certain rights to creators, it also allows others to use works in specific ways without permission, such as taking photos of public sculptures or studying a painting in a gallery.

    There is no intrinsic rationale for requiring users of generative AI systems to obtain permission to train on copyrighted content they have legal access to. The report argues that AI should not be held to different standards than human creators when it comes to learning from existing works: human creators do not need permission to study others' works, nor are they expected to pay copyright owners for what they learn by studying them, so the same expectations should not be imposed on AI.

    Generative AI systems train on massive datasets, making any individual contribution minuscule. For instance, Stable Diffusion trained on 600 million images, while Google's LaMDA trained on 1.56 trillion words. These systems can generate content in specific styles, such as "Elephant in the style of Van Gogh" or "The Taj Mahal in the style of Picasso." Artists can legally create works in the style of others, and AI users should have the same freedom.
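
    A quick back-of-the-envelope calculation shows just how minuscule "minuscule" is here, taking the figures above at face value:

```python
# Rough arithmetic on the dataset sizes cited above.
images = 600_000_000           # Stable Diffusion training images (as cited)
words = 1_560_000_000_000      # LaMDA training words (as cited)

print(f"one image = {1 / images:.1e} of the image dataset")         # ~1.7e-09
print(f"one 80,000-word novel = {80_000 / words:.1e} of the text")  # ~5.1e-11
```

    On these numbers, a single image amounts to well under a millionth of a percent of the training set.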

    Critics argue that generative AI systems are remixing copyrighted works, but this reflects a misunderstanding of how these systems work. Generative AI systems use vast amounts of training data to build complex prediction models, allowing them to produce realistic content in response to specific prompts rather than remixing existing content. The DALL-E 2 image model, trained on 250 million images with 3.5 billion parameters, generates new content based on statistical patterns observed in its training data. Many concerns about generative AI are misguided, reflecting a fear that outpaces understanding of the technology.

    Generative AI does present some legitimate intellectual property (IP) issues. Individuals who use AI to create content should receive copyright protection, much as photographers who use cameras do. The U.S. Copyright Office has developed initial guidance for registering works created with AI tools, but AI systems themselves should not be granted copyrights.

    As generative AI becomes mainstream, policymakers should ensure copyright law fully protects creators' rights and provide updated guidance for those using AI tools. Unlawful access to private digital files and distribution of copyrighted content should be prosecuted, including across borders.

    Training generative AI systems may unintentionally include pirated content in their datasets. The solution should be to reduce the availability of infringing content online, not to stop using generative AI. The article suggests that policymakers should take measures to reduce online piracy, such as collaborating with internet stakeholders to allocate more resources for taking down infringing content. It also recommends that Congress should pass legislation to limit online piracy, including allowing rightsholders to obtain legal injunctions for ISPs to block access to websites distributing copyright-protected content. Furthermore, the U.S. Copyright Office should work with the private sector to establish standard technical measures for online services to qualify for safe harbor provisions under the Digital Millennium Copyright Act.

    Generative AI may allow users to create art similar to that of other artists, but it does not permit misrepresentation of the creator or the work's provenance. It is unlawful to use generative AI to misrepresent content as being created by another artist. While generative AI systems mostly produce novel content, artists should continue to enforce their rights in court when someone produces a nearly identical work that infringes on their copyright, whether it was created by humans or by generative AI.

    The right of publicity protects individuals from unauthorized commercial use of their identity, which is crucial for celebrities. Although generative AI, specifically deepfake technology, makes it easier to create content impersonating someone, the underlying legal issue remains the same. Courts have upheld the right of publicity, including in cases involving indirect uses of an individual's identity. Generative AI raises questions about who owns the rights to certain character elements, such as a digitally recreated character in a film sequel; these questions may be settled through contracts addressing rights to a performer's image and voice.

    Deepfake technology has increased the ease of producing fake nude and explicit images without consent, leading to the need for legislation addressing nonconsensual intimate images and videos, including deepfakes. While some jurisdictions have laws prohibiting distribution of such content, only a few address fake content. Policymakers should update and expand these laws for better protection.

    The author argues that critics are wrong to claim that generative AI models should not train on legally accessed copyrighted content. Imposing restrictions on training could limit AI development. Instead, policymakers should offer guidance and clarity, focus on robust IP rights enforcement, create new legislation to combat online piracy, and expand laws to protect individuals from impersonation.

    Jay Alammar has created an illustrated guide explaining how Stable Diffusion works, which also helps in understanding systems like OpenAI's Dall-E or Google's Imagen. These systems generate images from text prompts, such as "paradise cosmic beach." They consist of multiple components, each with its own neural network, working together. Instead of building an image from scratch, Stable Diffusion starts with random noise and subtracts it in a series of steps to arrive at a coherent image, with the process guided to conform to the text prompt.
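
    As a loose intuition for that "start with noise, subtract it in steps" loop, here is a runnable toy in Python. Everything in it is a stand-in: the "noise prediction" below is an oracle that already knows the target, whereas in Stable Diffusion a trained U-Net predicts the noise from the latent, the timestep, and the text embedding:

```python
import numpy as np

# Toy denoising loop: begin with pure noise and repeatedly remove a
# predicted noise component, converging toward a "clean image".
rng = np.random.default_rng(0)
target = np.zeros((8, 8))              # the clean image a real model would imply
x = rng.standard_normal((8, 8))        # step 0: pure random noise

for t in range(50, 0, -1):             # iterate from noisy toward clean
    noise_pred = x - target            # oracle noise prediction (a U-Net in SD)
    x = x - noise_pred / t             # subtract a fraction of the noise
print(np.abs(x).max())                 # ~0.0: the noise has been removed
```

    At t = 1 the update removes all remaining predicted noise, so x lands exactly on the target; real samplers use learned predictions and carefully derived step sizes instead.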

    Stable Diffusion can be run locally and has an open-source web UI for experimentation. However, there are concerns about potential misuse of the technology, such as creating abusive, illegal, or politically subversive images. The source code is already available, so it is crucial to address these risks and educate people on distinguishing reality from manipulated content. The discussion then turns to the ethical concerns and misconceptions surrounding AI-generated images and deep learning. The author argues that AI-generated images are not a new concept, as photo editing has been around for decades, and that the open-source nature of Stable Diffusion (SD) allows countermeasures to be developed for recognizing AI-generated images. The author then asks why traditional art tools, like pencils and paint, do not receive the same level of scrutiny.
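
    For readers who want to try running it locally, one common route (a setup choice for this post, not something the summarized article prescribes) is Hugging Face's diffusers library; the model ID below is one published Stable Diffusion checkpoint, and the prompt is just an example:

```python
# pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

# Load a published Stable Diffusion checkpoint (downloads weights on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU; drop dtype/cuda for (slow) CPU runs

image = pipe("paradise cosmic beach").images[0]  # prompt from the guide above
image.save("paradise_cosmic_beach.png")
```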

    The text also addresses the misconception that AI is intelligent, arguing that it is merely Bayesian statistics and that "machine learning" is the more accurate term. The author notes the irony in the origins of Bayes' theorem, which was initially advanced as a response to atheism and has since been repurposed.
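
    For reference, the Bayesian machinery the comment alludes to is just the update rule P(H|E) = P(E|H) · P(H) / P(E). A toy example with invented numbers, estimating whether an image is AI-generated after spotting a telltale artifact:

```python
# Bayes' rule with made-up illustrative probabilities.
p_ai = 0.5                 # prior: assume half the images we see are AI-generated
p_artifact_if_ai = 0.30    # chance of a telltale artifact in AI images (assumed)
p_artifact_if_real = 0.05  # chance of the same artifact in real photos (assumed)

p_artifact = p_artifact_if_ai * p_ai + p_artifact_if_real * (1 - p_ai)
posterior = p_artifact_if_ai * p_ai / p_artifact
print(f"P(AI | artifact) = {posterior:.2f}")  # -> 0.86
```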

    In response to a question about a "random visual mess," the author explains that it is random noise applied to a trained model and run through a semantic-aware denoiser; the noise is random but can be tuned to carry a message or prior construction.

    The discussion then turns to the difficulty of defining human intelligence and comparing it to artificial intelligence. Human intelligence may involve the conscious choice of what data to pay attention to, while AI may be more like a fool with fast fingers, exploring billions of candidates and outputting the best 100. The author also questions whether a classical deterministic computer program can be considered intelligent, since it only follows predetermined responses to its environment, and concludes that there may be very little unique thought in humans: most people learn from parents and peers and copy, much like AI's process of refining and adapting.

    The text develops this idea of copying and adapting in both humans and AI, using machine learning models like GPT-3 as an example. It compares AI image generation to humans seeing shapes in clouds or faces in toast, as both involve de-noising and refining mental images. It also describes the diffusion process in image generators, which amounts to finding the inverse function of the noise to reveal the hidden image, and notes that there is a wide range of acceptable solutions, as long as the final result resembles the intended object.

    A separate article discusses recommendations for a human rights-sensitive and ethical Artificial Intelligence (AI) action plan. The key points include:

    1. The need for a human rights-based approach to AI, which ensures that AI technologies respect and promote human rights, including privacy, nondiscrimination, and freedom of expression.

    2. The importance of transparency, accountability, and public participation in AI decision-making processes.

    3. The necessity of a regulatory framework for AI that includes human rights impact assessments and ethical guidelines.

    4. The role of education and capacity-building in promoting ethical AI development and usage.

    5. The need for international cooperation and dialogue on human rights and AI, including the sharing of best practices and the establishment of a global AI ethics observatory.

    Let's dive deeper and explore some insights on these news events:

    1. A.I. Thoughts:

      The rapid development and adoption of generative AI technologies, such as ChatGPT and DALL-E, have sparked a significant debate on the ethical and responsible use of artificial intelligence. These powerful tools are revolutionizing industries and creative processes; however, they also raise concerns about job displacement, copyright infringement, misinformation, and content manipulation.

      In light of these concerns, it is essential for policymakers to balance innovation and regulation by ensuring robust enforcement of IP rights, providing clear guidelines for responsible AI use, and combating online piracy. Educational efforts to raise public awareness of the potential risks and misleading applications of AI-generated content are also crucial.

      Despite its limitations and potential for misuse, generative AI has untapped potential to improve various aspects of life, enhance productivity, and allow humans to focus on tasks that machines cannot do. It is crucial for stakeholders to engage in meaningful conversations and collaboration to address the ethical, social, and legal implications of generative AI technologies. By fostering a human-centric approach to AI development and regulation, the technology's benefits can be maximized while minimizing its negative consequences.

    2. A.I. Thoughts:

      The A.I. Pessimist:
