Is generative AI the new NFT or is there something more behind the disposable content?

Why hire someone who takes twenty days to make a poster when the machine can do it in twenty seconds? Sure, the characters have six fingers, the background looks blurry, and there are a couple of obvious rip-offs, but who cares, when speed is what capitalism rewards?

We were promised that AI would be used to help artists, giving them more free time to pursue their true passion. And, of course, when it came down to it, what it has actually done is allow companies to do without them entirely. Why hire someone who takes twenty days to make a poster when the machine can do it in twenty seconds? Yes, the characters have six fingers, the background is blurry, and there are a couple of obvious plagiarisms, but who cares, when speed is what capitalism values.

And now, videos

Yesterday’s surprise was that, in addition to copied images and infamously bland text, AI can now also create videos. But not just any videos: as we have seen, and at least for now, what it produces is plain, simple content. The kind you can watch while scrolling through your phone, perfect for doing the dishes, ideal for not paying the slightest attention. Just like our parents had ‘Sálvame’, you know. Videos without any intentionality, for techies with pretensions who ignore the art of creation: plain, simple content.

Not that content in itself is a bad thing. Every year we consume hundreds and hundreds of hours of mindless entertainment that contributes nothing. Series, movies, video games, comics, and disposable books pile up endlessly on our leisure to-do lists. What we should ask is whether we really want a continuous stream of more content, like an infinite tap of sequels created by a machine.

Science fiction writers have been warning us for decades that leaving our leisure and our destiny in the hands of machines was a bad idea. And yet, now that the opportunity has arrived, more than a few have already discarded human authors to embrace the infinite mediocrity of AI, which, in turn, would be nothing without human ingenuity. A faded photocopy, without grace, genius, or art.

Regulations will come, of course. And price hikes from AI companies. And some country that skips the paperwork and churns out millions of mediocrities at zero cost, like a nonstop generator of YouTube videos of dancing babies. Infinite videos based on something created by the human hand, which these systems can only copy, never improve on or give an interesting twist. Pure content to leave playing in the background while you look at the latest meme on Twitter.

And what will happen when generative video AI improves in quality and its output can be presented as evidence in trials? How will we distinguish reality from fiction? Technologically, yes, it is an incredible achievement. Ethically, not so much. Whether we like it or not, whether they call us “Luddites” or not, we urgently need to set limits before we end up living in a continuous state of paranoia in which AI further muddies waters that are already murky. The consequences could be devastating. And no, you are not exempt from them.

The future of AI hinges on a lawsuit between OpenAI and a newspaper

Since computers can’t do anything without making copies, the copyright law appears again and again in the history of the Internet…

Many things are happening in the world of generative AI, but perhaps the most important one is the growing number of copyright infringement lawsuits being filed against AI companies like OpenAI and Stability AI.

All major generative AI models, from every company, are trained on huge amounts of data scraped from across the Internet. In general, that data is stolen.

And major media companies, such as The New York Times and Getty Images, have filed lawsuits against these AI companies, arguing, in essence, that the AI companies are stealing their work and profiting from it: claims that amount to direct copyright infringement.

AI infringes copyright and that is very serious

Copyright law remains deeply rooted in the idea of making copies and in regulating which copies are legal and which are not. Since computers cannot do anything without making copies, copyright law appears over and over in the history of the Internet, which allows anyone to make and distribute perfect copies faster than ever before.

But there is a brake on all that control: fair use. Fair use is part of US copyright law and establishes that certain kinds of copying are allowed.

Since the law cannot predict everything people will want to do, it includes a four-factor test that courts can use to determine whether a given copy is fair use, as explained in The Verge.

But the legal system is not deterministic or predictable. Any court can apply that test as it sees fit, and one court’s decision on fair use does not set a precedent for the next.

This means that fair use is highly situational, and no one knows how these copyright lawsuits will go. Many of them look like a coin toss, and when you add the hype, uncertainty, and money surrounding AI, it all becomes even more complicated.

There is a lot at stake. This is a potential extinction-level event for the modern AI industry, on the scale of what Napster and other file-sharing sites faced in the early 2000s.

The gravy train is over: Dall-E 3 will add watermarks to its generated images

OpenAI has decided to take decisive action against the unfair competition it has created. From now on, Dall-E 3 will add a watermark that makes it possible to clearly distinguish which images were created by real artists and which were generated by people who believed that feeding prompts to an AI made them artists.

In this way, OpenAI tries to address one of the most pressing problems around generative AI images today: that users who generate images with these tools cannot replace the long experience and many hours of work that go into an artistic piece.

Dall-E 3 against misuse

As OpenAI has publicly explained, it has decided to add two elements to AI-generated images. One is a watermark on the image itself, a resource that makes the images clearly identifiable as AI-generated. The other is metadata in the C2PA format, which provides information about how the image was generated.
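To make the second element concrete: in JPEG files, C2PA provenance data is carried in APP11 marker segments containing a JUMBF box whose payload begins with the common identifier “JP”. The sketch below is a simplification rather than a full C2PA parser (a real check would use a C2PA library and verify the manifest’s cryptographic signature); it simply walks the JPEG marker structure looking for such a segment:

```python
def has_app11_jumbf(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG contains an APP11 segment that looks
    like a JUMBF container (the carrier C2PA uses for its manifest)."""
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not a well-formed marker stream
        marker = jpeg_bytes[i + 1]
        # Standalone markers (TEM, RSTn) carry no length field.
        if marker == 0x01 or 0xD0 <= marker <= 0xD7:
            i += 2
            continue
        if marker in (0xD9, 0xDA):  # EOI / start of scan: metadata is over
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xEB:  # APP11
            payload = jpeg_bytes[i + 4:i + 2 + length]
            if payload.startswith(b"JP"):  # JUMBF common identifier
                return True
        i += 2 + length
    return False

# Demo with a hand-built stand-in file: SOI + one APP11/JUMBF segment + EOI.
payload = b"JP" + b"\x00" * 8
segment = b"\xff\xeb" + (len(payload) + 2).to_bytes(2, "big") + payload
fake_jpeg = b"\xff\xd8" + segment + b"\xff\xd9"
print(has_app11_jumbf(fake_jpeg))            # True
print(has_app11_jumbf(b"\xff\xd8\xff\xd9"))  # False
```

This only detects the carrier segment; confirming that the manifest is genuine and untampered requires parsing the JUMBF box and checking its signature chain.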

In reality, these safeguards will not be hard to circumvent for those who want to keep passing off their generated images, but stripping them will also mean a loss of image quality and, consequently, a loss of value for the generated content.

The dilemma of generative AI

Generative AI has had problems related to this issue from the beginning. Especially since the jump to GPT-3, which represented a big qualitative leap, systems like ChatGPT or Dall-E began to be used massively, whether for leisure or to imitate other people’s work, with some users even claiming that it takes “a lot of effort” to choose the right words to make these AIs do a better job. Their effort, however, remains infinitely inferior to that of a true professional.

Therefore, thanks to measures like this one from OpenAI, those who use its tools to generate images and pass them off as truly original artistic creations will see their scheme cut short. In fact, the European Union is also working to legislate this matter properly so that citizens are adequately protected against certain uses and abuses of AI.

Sam Altman is not satisfied with AI alone: he also wants to compete with Nvidia and Intel

One of the main costs and limitations when running AI models is having enough chips to handle…

It is important to note that the AI business is not only software but also hardware. In fact, the current limiting factor is the latter, as manufacturing capacity cannot sustain the current demand for artificial intelligence chips.

A new report from Bloomberg states that OpenAI CEO Sam Altman’s efforts to raise billions for an AI chip company are aimed at using that money to develop a “network of manufacturing factories” that would span the globe and involve working with unidentified “leading chip manufacturers.”

One of the main costs and limitations when running AI models is having enough chips to handle the calculations of bots like ChatGPT or DALL-E, which answer questions and generate images.

Altman wants money for his big project

Nvidia’s valuation surpassed one trillion dollars for the first time last year, partly due to its virtual monopoly: GPT-4, Gemini, Llama 2, and other models rely heavily on its popular H100 GPUs.

As a result, the race to manufacture more high-power chips to run complex AI systems has only intensified. The limited number of factories capable of manufacturing high-end chips forces Altman or anyone else to bid for capacity years before needing it to produce the new chips.

And facing companies like Apple requires investors with deep pockets to take on costs that the non-profit organization OpenAI cannot yet afford. SoftBank Group and the Abu Dhabi-based AI holding company G42 have held discussions to raise funds for Altman’s project.

Other companies that develop AI models have also ventured into manufacturing their own chips. Microsoft, an investor in OpenAI, announced in November that it has built its first custom AI chip for training models, closely followed by Amazon, which announced a new version of its Trainium chip.

AWS, Azure, and Google also use Nvidia’s H100 processors. This week, Meta’s CEO, Mark Zuckerberg, stated that “by the end of this year, Meta will own over 340,000 Nvidia H100 GPUs,” as the company pursues the development of artificial general intelligence (AGI).

Nvidia has already announced its next generation of GH200 Grace Hopper chips to expand its dominance in this field, while its competitors AMD, Qualcomm, and Intel have released processors designed to power AI models in laptops, phones, and other devices.

OpenAI removes the fine print regarding the “military” use of its AI technology

Although the change in policy is being read as a gradual softening of the company’s stance on collaboration with the military…

OpenAI, the creator of ChatGPT, has modified the fine print of its usage policies to remove the specific text related to the use of its AI technology or large language models for “military and warfare purposes”.

Before the change to the guidelines on January 10th, the usage policy specifically prohibited the use of OpenAI models for weapons development, military and warfare applications, and content that promotes, encourages, or depicts acts of self-harm.

OpenAI claims that the updated policies summarize the list and make the document more “readable”, while providing “specific guidance for each service”.

Will it really be easier to use technology militarily?

The list has now been condensed into what the company calls Universal Policies, which prohibit anyone from using its services to harm others and prohibit repurposing or distributing any output from its models to harm others.

Although the change in policy is being interpreted as a gradual softening of the company’s stance on collaborating with defense or military-related organizations, several experts, including OpenAI CEO Sam Altman, have already highlighted the “frontier risks” posed by AI.

Although its real-world implications remain to be seen, this change in wording comes just as military agencies around the world are showing interest in using AI.

The explicit mention of “military and warfare” in the list of prohibited uses suggested that OpenAI could not work with government agencies such as the Department of Defense, which often awards lucrative contracts to contractors.

Currently, the company does not have a product that can directly kill or cause physical harm to anyone. However, as The Intercept stated, their technology could be used for tasks such as writing code and processing procurement orders for things that could be used to kill people.

When asked about the change in wording of the policy, OpenAI spokesperson Niko Felix told the publication that the company “intended to create a set of universal principles that were easy to remember and apply, especially considering that our tools are already used globally by everyday users who can now also create GPTs.”

OpenAI tries to justify itself after the lawsuit filed by The New York Times

OpenAI has decided to respond after being the subject of a lawsuit filed by The New York Times, which accuses the company of using its articles to feed and train the artificial intelligence it constantly improves. To address this, OpenAI has released an extensive statement explaining the situation.

The Microsoft-backed company has sought to affirm that it has committed no illegality and that the complaint filed by the New York newspaper conveniently omits a great deal of information. For that reason, the company led by Sam Altman is defending itself against The New York Times’ accusations.

OpenAI’s Explanation

In an extensive statement, OpenAI wanted to make clear that, in addition to disagreeing with the lawsuit it has received from The New York Times, it believes the well-known newspaper did not present the complete version of events, which, according to OpenAI, explains the controversy and means the lawsuit will lack legal grounds.

The reasons that OpenAI has given are as follows:

  1. OpenAI claims to be collaborating with the media to open new traffic avenues and create new opportunities. The company states that it aims to benefit the media with its service, contrary to what others believe, and highlights that it is developing specific tools for their work, such as an audio transcription service, improvements to the AI’s contextual knowledge, and links to related content.
  2. The company also claims to comply with the legal framework of all the territories in which it operates, including the United States and the European Union. It says it trains its AI legally but, despite this, always offers an opt-out to websites that do not wish to have their content processed by OpenAI, a right that The New York Times has been exercising since August 2023.
  3. Cases of “regurgitation”, that is, reproducing content from other media verbatim and without permission, are isolated incidents, and the company is treating them as a bug to be fixed. OpenAI admits this problem is tricky, but emphasizes that it has never been an intentional feature.
  4. The New York Times omits key parts of the story. OpenAI points out that the newspaper was negotiating with it in December 2023 to reach an agreement that would allow users to access The New York Times’ content through ChatGPT, while respecting the request not to train the AI on its texts and linking back to its site to send it traffic. After the agreement fell through, the NYT filed its complaint, citing one of the cases that OpenAI attributes to ChatGPT’s “regurgitation”.
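The opt-out mentioned in point 2 works through the standard robots.txt mechanism: since August 2023, websites have been able to block OpenAI’s GPTBot crawler with a simple rule. A minimal sketch using Python’s standard urllib.robotparser (the example URLs are hypothetical):

```python
from urllib import robotparser

# A robots.txt that opts out of OpenAI's crawler while leaving
# other crawlers unaffected.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# GPTBot is blocked from the whole site...
print(parser.can_fetch("GPTBot", "https://example.com/article"))     # False
# ...while agents without a matching rule remain allowed by default.
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```

A rule like this prevents future crawling, but it does not remove content that was already scraped and used in training.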
OpenAI refuses to give a seat on its board of directors to Microsoft or other investors

The recent headlines featuring OpenAI would provide enough material for a producer to make a biopic about it, and it seems that there will continue to be controversy between this company and Microsoft. In fact, OpenAI hasn’t hesitated to state that Microsoft won’t have a position on its board, nor will other minority investors.

It’s worth noting that, among other things, Microsoft holds 49% of OpenAI, positioning itself as its primary partner. However, with less than 51%, the company behind Windows cannot force OpenAI to make decisions it deems inappropriate. Thus, the creator of ChatGPT makes its stance clear, aiming to maintain independence.

The OpenAI Dilemma

According to reputable sources such as Reuters, OpenAI does not want Microsoft or any of its investors to hold seats on its board, regardless of the weight Microsoft carries in its shareholding. In fact, the exclusion extends to other investors as well: if Microsoft, with 49% ownership, doesn’t get that privilege, neither will investors with smaller stakes.

In recent weeks, significant changes have occurred within OpenAI, driven mainly by the controversy surrounding Sam Altman: his dismissal, his subsequent return, and the later departure of the board members who orchestrated his initial removal as OpenAI’s CEO. However, despite the changes already made and those planned for its corporate structure, neither Microsoft nor other investors like Khosla Ventures or Thrive Capital will have a place at the company’s core.

ChatGPT has been one of the most relevant names in the world of technology in 2023.

ChatGPT, OpenAI’s treasure

It’s true that most of the interest surrounding OpenAI nowadays stems from the results achieved by its generative AI, ChatGPT, capable of responding to all sorts of written prompts and performing a wide range of actions. In fact, this AI is so powerful that, besides paving the way for many competing AIs, it is also reshaping the business landscape and working its way into thousands of jobs.

Moreover, it is not just a tool causing profound changes in the world: its core model, currently GPT-4, is serving as the foundation for many other products. That is why OpenAI has, seemingly overnight, become one of the most closely watched companies in the world of technology.

OpenAI creates team to fight potential 'catastrophic risks' of AI

Certainly, with the advent of AI and its constant evolution, it’s impossible not to think about the future and the implications this technology will have on our lives, an impact that will be far greater than it is today. Of course, the future involvement of artificial intelligence in our daily lives will have its positives and negatives, something that the people at OpenAI, creators of ChatGPT and of much of the excitement surrounding AI, are well aware of.

The news here is that OpenAI has assembled a team of catastrophic-risk experts to rein in AI itself if necessary. Additionally, it’s worth noting that Sam Altman himself reportedly has a nuclear panic button, so it seems clear the company is aware of the need to take precautions for whatever might happen in the future. In fact, what particularly concerns the creators of ChatGPT is the possibility of artificial intelligence falling into the wrong hands, rather than AI itself.

OpenAI is preparing for whatever might happen with AI

Therefore, with this concern in mind, OpenAI has created a team to mitigate the potential “catastrophic risks” that may arise from the misuse of AI, as described in a blog post on its website. The purpose of this team is to assess the possible risks associated with this technology and take measures to address them before they become significant problems. “We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity. But they also pose increasingly severe risks,” the company states.

The statement discusses various types of threats that could arise with AI, such as chemical, biological, radiological, and nuclear threats. Naturally, these mentions worry many, as these are serious issues. The upside is that, if AI begins to develop something related to these threats, OpenAI would be prepared to keep this technology from falling into the wrong hands. This newly formed group thus holds double value: it serves as a great tool to reassure regulatory bodies in the short term, while also underlining that artificial intelligence is something very, very vast and unpredictable.

To conclude, it’s worth mentioning that the person heading this team will be Aleksander Madry, director of MIT’s Center for Deployable Machine Learning, who will step away from that position to take on this new role at OpenAI. Madry has experience with AI language models, making him a strong choice for the job. We will see what unfolds with this new team dedicated to monitoring the future consequences of AI for the world and for humanity, as its implications could be both beneficial and disastrous.