Welcome back to AI Hungry, where the pulse of artificial intelligence innovation and regulation beats strongest. In our latest roundup, we delve into the Middle East's ambitious plans to transform into a global AI powerhouse, with the UAE at the forefront of this tech revolution. We also unpack the EU's unanimous approval of the landmark AI Act, a pioneering move that sets the stage for global AI regulatory standards.
From the strategic initiatives of the Middle East to the legislative corridors of Europe, these developments signal a pivotal moment in AI governance and deployment. Discover how these regions are shaping the future of AI, navigating the complexities of infrastructure, ethical applications, and international compliance.
Middle East Eyes Artificial Intelligence as Next Major Growth Sector
The United Arab Emirates (UAE) is leading the Middle East's charge into artificial intelligence (AI), aiming to become a global AI hub by 2031. The UAE has launched a national AI strategy, focusing on health care, transportation, and education, and has established the Artificial Intelligence and Advanced Technology Council to oversee AI development.
While the UAE and other Middle Eastern countries such as Saudi Arabia and Israel are making significant strides in AI, concerns remain about real-world applications, infrastructure readiness, and the development of a skilled workforce. Experts suggest that the region's success in AI will depend on learning from global leaders and building local capacity to ensure sustainable and inclusive growth.
EU Unanimously Approves Landmark AI Act, Setting Global Regulatory Standards
The European Union has taken a significant step toward regulating artificial intelligence by unanimously approving the EU AI Act. This groundbreaking legislation, three years in the making, is set to become the world's first comprehensive framework for AI governance. The Act categorizes AI systems by the level of risk they pose to safety and fundamental rights, imposing stringent rules on high-risk systems and banning those deemed to pose unacceptable risk. It also establishes hefty fines for violations, ranging from €7.5 million to €35 million, or a percentage of global annual revenue.
Approval by the EU Member States is just the beginning: the Act now moves to a plenary vote in the European Parliament and formal adoption by EU ministers. Its rules will become binding within two years, with staggered timelines for different provisions. A dedicated EU AI Office will oversee enforcement, ensuring that AI development and deployment remain responsible and trustworthy. The Act also addresses law enforcement's use of biometric identification systems and imposes transparency obligations on developers of general-purpose AI, including the models behind systems like ChatGPT.
🧠 Neuralink Successfully Implants Device in Human Patient. Elon Musk's Neuralink has implanted a brain-computer interface in a human for the first time, with the patient reportedly recovering well. The company, which faced scrutiny over animal testing, aims to enable thought-based communication. (Link)
🚀 Meta Unveils Enhanced Code Llama Model with 70 Billion Parameters. Meta has released an updated version of Code Llama with 70 billion parameters, in three variants: a base model, one specialized for Python, and an instruction-tuned version that accepts natural-language prompts. Meta reports that the open models outperform rival open-source code models, and they are available for commercial use (a brief usage sketch follows these items). (Link)
🛒 Amazon Unveils Rufus: A New AI-Powered Shopping Assistant. Amazon has launched Rufus, an AI shopping assistant designed to make product searches and recommendations more conversational and intuitive. Currently in beta, Rufus will soon roll out to more U.S. customers. (Link)
📱 Apple to Launch Generative AI in iOS Later This Year, Enhancing Siri and Apps. Apple CEO Tim Cook announced the introduction of generative AI to iOS, promising significant enhancements to Siri and other applications. This major update is expected to bolster Apple's competitiveness in the AI market. (Link)
🌐 AI2 Launches OLMo, a Fully Open-Source Large Language Model. The Allen Institute for Artificial Intelligence has released OLMo, a fully open-source large language model, including its training data, code, and benchmarks. This move aims to enhance transparency and understanding in AI research. (Link)
🚨 US Lawmakers Challenge DOJ Over Grants for Biased AI Policing Tools. US lawmakers urge the Department of Justice to stop funding predictive policing tools, citing concerns over racial bias and inaccuracy. They demand a review of the grant program and the establishment of evidence standards. (Link)
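For readers who want to experiment with the new Code Llama release mentioned above, here is a minimal sketch of prompting the instruction-tuned 70B variant through the Hugging Face transformers library. The hub id codellama/CodeLlama-70b-Instruct-hf, the prompt, and the generation settings are illustrative assumptions rather than an official Meta example, and the 70B weights require substantial GPU memory.

```python
# Minimal sketch: prompting Code Llama 70B Instruct via Hugging Face transformers.
# The hub id, prompt, and settings are illustrative assumptions, not an official example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-70b-Instruct-hf"  # assumed hub id for the instruct variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread the 70B weights across available GPUs (requires accelerate)
    torch_dtype="auto",  # load in the checkpoint's native precision
)

# A natural-language prompt, as supported by the instruction-tuned variant.
prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Smaller Code Llama checkpoints follow the same pattern if the 70B model is too large for your hardware.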