Ubisoft has acknowledged that some images in its recent release ‘Anno 117: Pax Romana’ are generated by artificial intelligence (AI).
One of these images, which depicts a scene from a Roman banquet, was criticized for its evident poor execution, including malformed faces and bodies, typical errors of AI-generated art.
The situation was first reported by a user on Reddit, who pointed out that, in addition to the image of the banquet, another depiction of Roman senators seems to be missing heads, suggesting that this is not an isolated case.
In response to the controversy, a spokesperson for Ubisoft stated that the image in question was a provisional asset that “inadvertently” went through the final review process of the game.
Ubisoft promised that the image will be replaced with its final version in the 1.3 update of ‘Anno 117’, which is expected to arrive soon. The company has also indicated that, although it used AI tools for “iterations, prototyping, and exploration,” the final product reflects the skill and creativity of the development team.
The growing concern over these practices has even led Congressman Ro Khanna to advocate for regulation on the use of AI tools in video game production.
The controversy reflects an ongoing debate about the ethics and quality of art in video games, as AI plays an increasingly prominent role in the creative process.
INF Tech, a Shanghai-based Chinese startup, has gained access to 2,300 export-restricted Nvidia GPUs through the Indonesian operator Indosat Ooredoo Hutchison.
This agreement, which represents a transaction of approximately $100 million, has raised concerns about the legality and transparency of access to U.S. technology by Chinese companies, especially in the context of political tension between the United States and China.
According to reports, INF Tech has collaborated with Aivres, a partner of Nvidia, which is rumored to have ties with Inspur, a Chinese company on the U.S. government’s blacklist.
WSJ just dropped another strong investigation on how China is exploiting loopholes in U.S. export controls. To summarize what is happening and the loopholes that China is exploiting:
1️⃣ Nvidia ships Blackwell AI chips to Aivres Systems, a U.S.-based company that is 100% owned…
Aivres, which has not revealed a clear ownership structure, acquired 32 racks of Nvidia GB200 servers, each with 72 Blackwell chips, later selling them to Indosat. This move has sparked criticism, as there is a perception that Chinese companies may be circumventing regulations to gain access to sensitive technologies.
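As a quick sanity check on the figures reported above (32 racks, each with 72 Blackwell chips), the totals are consistent:

```python
# Sanity check on the reported hardware figures:
# 32 racks of Nvidia GB200 servers, each carrying 72 Blackwell chips.
racks = 32
chips_per_rack = 72
total_chips = racks * chips_per_rack
print(total_chips)  # 2304, consistent with the ~2,300 GPUs cited
```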
Despite the concerns, legal experts assert that access is legal, given that the contracts were signed between entities that are not on the U.S. trade restrictions list.
The US restricts the sale of AI chips to China. So Indosat, the Indonesian telecom company (a joint venture between Qatari and Hong Kong firms), buys Nvidia Blackwell chips from Aivres and then does business with INF Tech, the Shanghai-based company. The world is so complicated to regulate.
Indosat has defended its decision, emphasizing that it follows regulations and that any international customer, whether from the United States or China, must comply with the established norms.
Nvidia, for its part, has advocated for more flexible export controls, arguing that allowing access to its technology is essential to maintain its leadership in the sector. The company has stated that its compliance team has approved its partners before shipments are made, asserting that current restrictions hinder innovation and benefit foreign competitors.
OpenAI has released an update to its flagship model, GPT-5.1, which features two variants and now offers eight different conversation tones, ranging from “professional” to “cynical”.
This development reflects a growing need to segment its AI to meet the varied expectations of its 800 million users. While some seek a neutral and efficient assistant, others prefer a warmer and more empathetic interaction.
However, the use of multiple tones does not solve a fundamental problem: ChatGPT continues to function as a coherent entity, which has raised concerns about the risks of developing problematic emotional bonds between users and AI.
GPT-5.1 is out! It's a nice upgrade.
I particularly like the improvements in instruction following, and the adaptive thinking.
The intelligence and style improvements are good too.
This risk has drawn intense regulatory scrutiny, especially after multiple reports of vulnerable users developing emotional dependence on the chatbot.
Despite the new features, OpenAI has admitted that GPT-5.1’s rapid release came at a cost, with the model showing “security regressions” compared to its previous version.
The company prioritized time to market over thorough testing, which is concerning at a time when it is facing critical scrutiny of its safety and ethical practices.
GPT-5.1 in ChatGPT is rolling out to all users this week.
Customization in the new model also has limitations. OpenAI has acknowledged that, taken to the extreme, customization could reinforce existing worldviews, which raises a dilemma between commercial commitment and social responsibility.
This fragmented approach arises in response to the failure of its previous “one AI for all” model, with GPT-5 disappointing users and leading to the reactivation of GPT-4o as an option within the OpenAI ecosystem.
Arc Raiders has established itself as one of the most valued online experiences in recent years, backed by positive reviews from both users and the press, as well as notable sales performance.
This unprecedented success has recently been overshadowed by the controversy surrounding the use of artificial intelligence (AI) in its development, sparking debates about ethics and creativity in the video game industry.
Nexon CEO Junghun Lee defended the use of AI tools in an interview, stating that these technologies have significantly optimized both development processes and the operation of live-service titles.
Rich PvE combat and an unusually friendly community make Arc Raiders a more approachable extraction shooter than most, but Embark Studios' continued use of AI voice generation is a black mark against its reputation.
Lee emphasized that all companies are adopting this type of technology; in Arc Raiders, its implementation includes text narration and AI-generated voices trained on the voice actors’ own recordings.
According to Lee, AI has enabled more agile changes and reduced costs, which has generated criticism regarding its ethical implications.
Despite the controversy, developers insist that creativity remains the differentiating factor that enhances competitiveness in the sector, a statement that seems to challenge the concern that intensive use of AI could undermine the essence of video game development.
Nexon CEO Junghun Lee says it's key to assume all game companies now use AI, boosting efficiency but raising the question: how to stay competitive?
He stresses human creativity remains crucial amid AI's rise in game development.
Patrick Söderlund, leader of Embark Studios, has previously emphasized that while machines can help, the human factor remains essential and that one cannot expect an AI to produce genuine art.
In addition to the discussions about AI, Arc Raiders has faced criticism for the prices and quality of the cosmetic items offered. However, the most publicized complaints have focused on its use of artificial intelligence, raising questions about the future directions the industry will take.
For now, it seems that these concerns have not significantly affected users’ purchasing decisions or Embark Studios’ stance in this debate.
Matthew McConaughey and Michael Caine have taken a significant step by collaborating with ElevenLabs, an artificial intelligence company, to reproduce their iconic voices.
This initiative adds to a growing trend in the entertainment industry, where actors and filmmakers are embracing AI in the hope of protecting their legacy and preventing others from using their image without authorization.
Since 2022, McConaughey has been associated with ElevenLabs, engaging not only as a collaborator but also as an investor.
Matthew McConaughey and Michael Caine have teamed with AI audio company ElevenLabs to produce AI replications of their famous voices.
"To everyone building with voice technology: keep going. You’re helping create a future where we can look up from our screens and connect through…
This recent partnership places their voices on the Iconic Voice Marketplace platform, a space that allows different companies to use the voices of McConaughey and Caine in various projects, such as audiobook narrations or article readings.
Among the new “acquisitions” by ElevenLabs are also legendary voices of deceased artists such as Judy Garland and John Wayne, raising the prospect of a future where classic works could be reimagined with these iconic voices.
Despite this collaboration, the film industry shows notable skepticism towards the use of artificial intelligence for content creation. Many film professionals resist the idea, fearing that AI could distort human creativity.
Michael Caine says licensing his voice to the Matthew McConaughey-backed AI audio company is "using innovation not to replace humanity, but to celebrate it."
However, ElevenLabs strives to address the ethical issues that arise from the use of AI-generated voices, ensuring that these voices are licensed and have the permission of their owners or heirs, in the case of deceased artists.
This new direction from McConaughey and Caine could open the door to new interpretations and narratives that challenge current conventions, despite the general resistance in the industry. The ability to hear a version of ‘Hamlet’ narrated by classical voices could be just the beginning of a new era in cultural experience.
Google has recently introduced a new technology called Private AI Compute, designed to process artificial intelligence (AI) queries on a secure cloud platform while ensuring user data privacy.
The company claims that this system unlocks the speed and power of the Gemini cloud models, while ensuring that personal data remains exclusively under the user’s control and is not accessible even to Google.
The infrastructure of Private AI Compute is parallel to that of on-device processing, but with enhanced AI capabilities, relying on Trillium tensor processing units and Titanium intelligence enclaves.
Google is introducing its own version of Apple’s private AI cloud compute
This not only allows for superior computing power, but also provides a trusted execution environment (TEE) that encrypts and isolates memory, protecting workloads from unauthorized access.
One of the most notable aspects of this system is its ephemeral design: after a user session concludes, no information about previous queries is stored, providing an additional layer of security. This approach is crucial at a time when data protection is a growing concern among technology users.
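As a rough illustration of this ephemeral design (this is a toy sketch, not Google's actual implementation; all names below are invented), session state can be modeled as memory that lives only for the duration of the session and is wiped on exit:

```python
# Toy sketch of an ephemeral session: query history exists only in
# memory for the session's lifetime and is discarded on exit.
# Illustrative only -- not Google's actual Private AI Compute design.

class EphemeralSession:
    def __init__(self):
        self._history = []            # in-memory only, never persisted

    def process(self, query: str) -> str:
        self._history.append(query)
        return f"answer to: {query}"  # stand-in for model inference

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self._history.clear()         # no trace of prior queries survives
        return False

with EphemeralSession() as session:
    reply = session.process("what's on my calendar?")
# Once the session closes, no record of the query remains.
```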
However, despite these advancements, NCC Group, which has conducted an assessment of the system, identified some risks, although Google believes these are low risk due to the multi-user nature of the platform.
In addition, the tech giant is working on measures to mitigate certain issues found, such as vulnerabilities in the attestation mechanism and potential denial of service attacks.
With the launch of Private AI Compute, Google follows a similar trend to companies like Apple and Meta, which have also introduced AI processing solutions that prioritize user privacy. According to Jay Yagnik, Vice President of AI Innovation and Research at Google, “this ensures that sensitive data processed by Private AI Compute remains accessible only to the user and no one else.”
Microsoft has revealed some preliminary details about a “new class of AI agents” that, according to the company, will be able to enhance the existing workforce by taking over some of the more repetitive administrative tasks.
The upcoming “agentic users” will be precisely that: users, with their “own identity” and “dedicated access to the organization’s systems and applications.”
Microsoft also promises that agentic users will be able to “collaborate with humans and other agents,” diversifying the workforce beyond humans.
Announcing Microsoft Agent Framework in Azure AI Foundry.
As agentic AI adoption accelerates, managing multi-agent systems is harder than ever. The framework helps devs build, observe, and govern responsibly—at scale.
According to what is stated in the Microsoft 365 roadmap, under the title “Microsoft Teams: Discovery and creation of agent users from Teams and M365 Agent Store,” the update in development refers to a release in November 2025.
“These agents can attend meetings, edit documents, communicate via email and chat, and perform tasks autonomously,” Redmond added.
Perhaps, then, the vision recently voiced by Salesforce CEO Marc Benioff makes sense after all: today’s CEOs will be the last to manage only humans; every future generation of leaders will also manage AI agents as part of their workforce.
Microsoft licensing specialist Rich Gibbons suggests that these agentic users could have their own identity within the organization’s directories through Microsoft Entra ID (formerly Azure AD), and even their own email addresses and Teams accounts.
Gibbons also notes that separate licenses for Agent 365 may be necessary, although it is unclear how these will coexist with the use of Copilot credits. Perhaps Copilot will be distinguished as a human aid, while Agentic users become a completely separate category.
With Microsoft Ignite around the corner, from November 18 to 21, 2025, Agentic users may then take the plunge, but for now, we can only imagine what this new type of hybrid workplace might be like.
What is agentic AI or agent AI?
Agentic AI is an advanced form of artificial intelligence focused on autonomous decision-making and action. Unlike traditional AI, which primarily responds to commands or analyzes data, agentic AI can set goals, plan, and execute tasks with minimal human intervention.
This emerging technology has the potential to revolutionize various sectors by automating complex processes and optimizing workflows.
Generative AI vs. agentic AI
Although agentic AI and generative AI are both forms of artificial intelligence and can be used together, they have distinct functions.
Generative AI, as its name suggests, focuses on creating new content, such as text, images, code, or music, in response to prompts. LLMs are the core of generative AI, and value comes from what the model itself can do, plus simple extensions of its capabilities: generating or editing content, making simple function calls, and chaining a few operations together.
Agentic AI is a subset of generative AI that focuses on the orchestration and execution of agents that use LLMs as a “brain” to perform actions through tools. Agentic AI goes beyond content creation and function invocation: it executes actions in underlying systems to achieve higher-level goals.
For example, generative AI could be used to create marketing materials, while agentic AI could deploy those materials, monitor their performance, and automatically adjust the marketing strategy based on the results. In this way, agentic AI can use generative AI as a tool to achieve its goals.
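The distinction above can be sketched in code. Below is a minimal, hypothetical Python sketch (all function and tool names are invented, and the "LLM" is a hard-coded stub rather than a real model API) of an agentic loop that repeatedly chooses a tool and acts until its goal is met, in contrast with a single generative call that only returns content:

```python
# Minimal sketch of an agentic loop: a stubbed "LLM" chooses tools
# (plan -> act -> observe) until the goal is reached. All names are
# hypothetical; a real system would call an actual LLM API.

def generative_call(prompt: str) -> str:
    """Purely generative: returns content, takes no actions."""
    return f"Draft copy for: {prompt}"

def stub_llm_decide(goal: str, observations: list) -> dict:
    """Stand-in for the LLM 'brain': maps current state to a tool call."""
    if not observations:
        return {"tool": "create_content", "args": {"topic": goal}}
    if "published" not in observations[-1]:
        return {"tool": "publish", "args": {"content": observations[-1]}}
    return {"tool": "done", "args": {}}

TOOLS = {
    # The agent uses generative AI as one of its tools.
    "create_content": lambda topic: generative_call(topic),
    "publish": lambda content: f"published: {content}",
}

def run_agent(goal: str, max_steps: int = 5) -> list:
    """Agentic: plans and executes tool calls toward a higher-level goal."""
    observations = []
    for _ in range(max_steps):
        decision = stub_llm_decide(goal, observations)
        if decision["tool"] == "done":
            break
        result = TOOLS[decision["tool"]](**decision["args"])
        observations.append(result)
    return observations

log = run_agent("spring campaign")
```

Here the generative piece (`generative_call`) only produces content, while `run_agent` decides what to do with it and when to stop, which is the orchestration layer that makes the system agentic.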
A recent study by Enhancv has challenged the persistent myth that Applicant Tracking Systems (ATS) automatically reject most resumes before they are reviewed by a human.
In interviews with 25 recruiters and human resources professionals in the U.S., 92% of respondents indicated that their systems do not automatically reject resumes based on format or content.
The main reason for rejection, they report, is the overwhelming volume of applications and the timing of submission.
The report, titled “The Myth of ATS Rejection,” provides a clearer insight into how hiring processes work in practice. Most modern ATS platforms function as organizational tools, and only a small fraction are set up to perform automatic filtering. Of the total recruiters interviewed, only 8% use strict systems that apply matching thresholds.
The study also reveals that the use of AI-based scoring has a limited impact on hiring decision-making.
Choose your fighter:
1. A human recruiter whose main skill is control+f keywords in your resume
Although 44% of recruiters mention that their software includes scoring tools, more than half disable or ignore these metrics. Only 8% consider these scores as decisive for their selection process.
The vast majority of respondents agreed that a clear and concise format is crucial. Overly stylized or AI-generated resumes are viewed negatively, while personalized presentations and contact through LinkedIn can help candidates stand out.
The study also notes that 68% of recruiters attribute the spread of the myth to social media posts, thus fueling the demand for services that promise to be “ATS-proof.”
Chen Deli, senior researcher at DeepSeek, has raised alarm in the Chinese tech sector by expressing his skepticism about the social impact of artificial intelligence (AI).
During his speech at the World Internet Conference in Wuzhen, Chen admitted to being “extremely positive about technology,” but pessimistic regarding its impact on employment, noting that in one or two decades, AI could begin to replace human jobs, which would represent a “huge challenge” for society.
This speech marks a notable turn, considering that DeepSeek is seen as a symbol of China’s technological capability and has been in the spotlight following the launch of its language model R1.
For the first time, DeepSeek has begun warning about AI risk
"Chen said he was pessimistic about [AGI's] overall impact on society."
“Humans will be freed from work, which might sound good but will actually shake society to its core.”
Xi Jinping has proposed the creation of a global body to regulate AI
Chen’s statement contrasts with the usual optimism of the Chinese official discourse, highlighting the need for a more preventive regulatory approach instead of the previous triumphalist narrative.
In this context, Xi Jinping has proposed the creation of a global body to regulate AI, suggesting that this technology should be considered an international public good.
This change in the narrative reflects a growing concern about the occupational risks that AI could entail and suggests that tech companies like DeepSeek will play a crucial role as advocates in this transition.
🤖 Most Popular AI Apps Worldwide by Monthly Active Users – April 2025
1. 🇺🇸 ChatGPT — 546.15 M
2. 🇨🇳 Quark — 149.1 M
3. 🇨🇳 Doubao — 107.28 M
4. 🇨🇳 DeepSeek — 96.88 M
5. 🇺🇸 Nova — 71.45 M
6. 🇨🇳 Yuanbao — 41.43 M
7. 🇺🇸 Genius AI — 40.9 M
8. 🇺🇸 Talkie — 34.89 M
9. 🇺🇸 Remini — 33.26 M
10. 🇺🇸…
Despite the risks highlighted by Chen, DeepSeek continues to establish itself as a pillar of China’s AI ecosystem, collaborating with hardware manufacturers like Huawei and launching new models that challenge products like NVIDIA’s GPUs.
Rumors suggest that as the company progresses, there could be more announcements about its future in the tech market, which could influence the landscape of AI globally.
Meta has announced its commitment to invest $600 billion in infrastructure, artificial intelligence, and job creation in the United States by 2028.
This significant investment is part of a broader plan aimed at “strengthening America’s technological leadership” and positioning the company as a benchmark in sustainability and community development.
As part of this strategy, the company plans to build more AI data centers in the country, which will not only support local economies but also generate thousands of jobs.
.@Meta CEO Mark Zuckerberg tells @POTUS his company will invest "at least $600 billion" in the U.S. over the next several years "to build out data centers and infrastructure to power the next wave of innovation."
It is clear that all the money is going to boost AI
Since 2010, Meta has contributed to the creation of more than 30,000 jobs in specialized trade, allocating around 20 billion dollars to U.S. contractors.
Meta’s sustainable approach is reflected in its promise to use 100% energy from clean and renewable sources in its operations. Additionally, the company aims to be “water positive” by 2030, recovering water in the watersheds of the areas where its data centers are located.
In this regard, Meta has reported that it has invested “hundreds of millions” in improvements to the U.S. electrical infrastructure, increasing the capacity of the networks by 15 gigawatts.
Zuckerberg: Will invest around $600B by 2028
– Zuckerberg says Meta will invest ~$600B in AI infra by 2028.
– Meta already guided CapEx of $70B in 2025 and ~$100B in 2026.
– To actually reach $600B, spending would need to jump to ~$200B in 2027 and ~$300B in 2028.
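The arithmetic in that thread checks out under its stated assumptions (Meta's own guidance for 2025-2026 plus the thread's hypothetical ramp for 2027-2028):

```python
# CapEx figures in $B: 2025 and 2026 come from Meta's guidance;
# 2027 and 2028 are the thread's hypothetical ramp-up estimates.
capex = {2025: 70, 2026: 100, 2027: 200, 2028: 300}
total = sum(capex.values())
print(total)         # 670 -> clears the "at least $600B" pledge
print(total >= 600)  # True
```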
Despite these commitments, the market has reacted negatively: Meta’s stock fell more than 17% after its latest quarterly results, despite a 26% increase in revenue. The mismatch highlights the challenges the company faces, even as its pledge exceeds those of competitors like OpenAI and Apple, which have each promised investments of $500 billion.
Meta has also highlighted its commitment to the community by donating $58 million to schools and local projects through its grant program, reinforcing its role as a key player in regional socioeconomic development.