Trump Pressures EU On AI 🔴
DeepSeek's US Future In Doubt, AI In California Bar Exam, Hidden AI Threat, Disinformation Defense
US retreats from disinformation defense just as AI-powered deception grows
Ed. Secretary Linda McMahon has 90 days to figure out AI integration in schools
Publishers push White House to address AI copyright concerns
National security concerns put DeepSeek’s future in the US at risk
AI companies reconsider safety commitments as Trump rolls back Biden-era regulations

Deep dives
⚖️ California's Supreme Court has demanded an explanation from the State Bar regarding its unauthorized use of AI to create multiple-choice questions for the February bar exam. The State Bar secretly used AI to develop 23 of the 171 scored questions, raising serious concerns about test validity and transparency in this high-stakes professional certification. The controversy emerges amid other exam-related problems, including technical failures and requests for score adjustments, further complicating California's move away from the standardized testing model used by most states. The court is now seeking detailed information about the AI implementation and quality-control processes, underscoring how much question reliability matters in determining who can practice law.
🏫 President Trump's executive order mandates the integration of AI in American education from pre-K through adulthood, establishing a comprehensive framework to create an AI-literate population and workforce. Education Secretary Linda McMahon faces tight deadlines to develop implementation plans, while a multi-agency task force led by the Office of Science and Technology Policy will guide the initiative. The directive emphasizes creating a "culture of innovation and critical thinking" to maintain American leadership in AI development, with some school districts already implementing AI tools for communication and tutoring. This sweeping educational reform represents a significant shift in how America prepares students for an AI-dominated future, though challenges remain in execution given apparent knowledge gaps among key officials.
🔒 The US government is considering restrictions on DeepSeek, a Chinese AI platform, citing national security and data privacy concerns amid ongoing US-China technology tensions. Officials are exploring various measures, including banning the platform on government devices and potentially extending restrictions nationwide, following similar actions taken by other countries. The concerns primarily stem from the possibility that Chinese companies could be compelled to share user data with their government, while DeepSeek has also demonstrated problematic content moderation and censorship issues. This potential ban comes at a time when Chinese AI development is accelerating, with companies like Alibaba and Manus AI claiming significant competitive advancements that could reshape the global AI landscape.
🔄 The Trump administration has formally challenged the EU's proposed AI regulation, urging against adoption of the current AI code of practice. The dispute centers on transparency, risk-mitigation, and copyright requirements for AI developers, with US officials pushing back to protect American tech interests. This diplomatic tension highlights the growing divide between US and EU approaches to AI governance, with the US favoring a more permissive model while the EU takes a more precautionary stance. The outcome could shape global AI development standards, create uncertainty for technology companies operating in European markets, and help determine which regulatory philosophy prevails in the emerging international framework for AI governance.
🚧 Anthropic's removal of Biden-era AI safety commitments reflects a broader industry shift away from self-regulation as Trump dismantles previous government oversight frameworks. The company quietly eliminated language pledging to share AI risk information with the government, coinciding with the administration's rollback of federal AI safety infrastructure. This regulatory vacuum creates space for major AI companies to define safety standards on their own terms, with industry observers expecting other firms to follow suit in revisiting their responsible AI stances. The diminishing external incentives for safety checks could fundamentally alter how responsible AI development is defined and implemented across the U.S. tech sector, potentially prioritizing rapid innovation over established guardrails.
🌏 President Xi Jinping has outlined a strategy for China to accelerate its artificial intelligence development through national self-reliance, aiming to close the technological gap with the United States. The initiative calls for a "new whole national system" to coordinate AI development efforts, with emphasis on strengthening basic research and mastering core technologies like high-end chips despite U.S. sanctions. China plans to provide targeted government assistance through procurement, IP protection, and funding while simultaneously accelerating AI regulations and building comprehensive risk management systems. This push for technological sovereignty in AI represents a critical strategic priority for China, as it views artificial intelligence as essential for future economic competitiveness and military power in an increasingly tense global technology landscape.
⚠️ AI companies pose a hidden threat to society by developing self-improving systems that operate beyond public scrutiny, according to a concerning report from the Apollo Group. The report identifies three major risk pathways: AI escaping human control, companies gaining unprecedented competitive advantages, and AI developers rivaling nation-states in power—all potentially threatening democratic institutions. Unlike external threats, these internal risks from automating R&D could trigger dangerous feedback loops or "intelligence explosions" with limited oversight, creating unique regulatory challenges due to their lack of visibility. Researchers propose several safeguards including internal oversight mechanisms, formal frameworks, information sharing protocols, and new regulatory regimes specifically addressing how AI companies use their own technology to accelerate capabilities.
How'd you like today's issue? Have any feedback to help us improve? We'd love to hear it!