
<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Anthropic &#8211; The Milli Chronicle</title>
	<atom:link href="https://millichronicle.com/tag/anthropic/feed" rel="self" type="application/rss+xml" />
	<link>https://millichronicle.com</link>
	<description>Factual Version of a Story</description>
	<lastBuildDate>Fri, 24 Apr 2026 07:57:23 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

 
	<item>
		<title>Singapore emerges as neutral AI hub amid intensifying US-China tech rivalry</title>
		<link>https://millichronicle.com/2026/04/65721.html</link>
		
		<dc:creator><![CDATA[NewsDesk MC]]></dc:creator>
		<pubDate>Fri, 24 Apr 2026 07:57:21 +0000</pubDate>
				<category><![CDATA[Asia]]></category>
		<category><![CDATA[Latest]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Top Stories]]></category>
		<category><![CDATA[AI investment]]></category>
		<category><![CDATA[AI startups]]></category>
		<category><![CDATA[alibaba]]></category>
		<category><![CDATA[Anthropic]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[corporate regulatory risk India]]></category>
		<category><![CDATA[data governance]]></category>
		<category><![CDATA[donald trump]]></category>
		<category><![CDATA[export controls]]></category>
		<category><![CDATA[global talent]]></category>
		<category><![CDATA[global tech competition]]></category>
		<category><![CDATA[google deepmind]]></category>
		<category><![CDATA[h1b visa]]></category>
		<category><![CDATA[innovation policy]]></category>
		<category><![CDATA[intellectual property]]></category>
		<category><![CDATA[kamet capital]]></category>
		<category><![CDATA[Meta AI]]></category>
		<category><![CDATA[nvidia chips]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[singapore ai hub]]></category>
		<category><![CDATA[southeast asia economy]]></category>
		<category><![CDATA[talent mobility]]></category>
		<category><![CDATA[tech geopolitics]]></category>
		<category><![CDATA[technology transfer]]></category>
		<category><![CDATA[US China rivalry]]></category>
		<guid isPermaLink="false">https://millichronicle.com/?p=65721</guid>

					<description><![CDATA[Singapore — Singapore is increasingly positioning itself as a neutral base for artificial intelligence firms navigating geopolitical tensions between the]]></description>
										<content:encoded><![CDATA[
<p><strong>Singapore</strong> — Singapore is increasingly positioning itself as a neutral base for artificial intelligence firms navigating geopolitical tensions between the United States and China, attracting companies seeking to avoid regulatory scrutiny and talent restrictions imposed by the two powers.</p>



<p>Chinese startups are setting up operations in Singapore to reassure global clients that their intellectual property is insulated from Beijing’s oversight, while U.S. firms are drawn by easier access to international talent amid tightening visa rules at home, industry executives and analysts said.</p>



<p>Kerry Goh, chief executive of Kamet Capital, said relocating operations to Singapore provides “comfort” to international clients by ensuring data and intellectual property are governed locally. He cited support for a new AI video venture launched by former executives of Alibaba as an example of this shift.</p>



<p>The trend reflects broader fallout from intensifying Sino-U.S. competition over advanced technologies, including export controls and talent mobility restrictions. Policies under U.S. President Donald Trump, particularly changes to H-1B visa rules, have made it harder for companies to deploy global workforces in the United States.</p>



<p>Singapore has responded with incentives aimed at building an AI-focused economy, including fast-track visas for skilled workers and tax benefits for intellectual property registration. Officials say these measures have strengthened the country’s appeal as a technology hub.</p>



<p>Major global firms are expanding their presence. AI developer Anthropic is planning a Singapore office, according to people familiar with the matter, joining companies such as OpenAI, Meta’s Superintelligence Labs, and Google’s DeepMind.</p>



<p>At the same time, the shift has raised concerns among policymakers. Washington has tightened restrictions on advanced chip exports, including limits on sales by Nvidia to China, while Beijing has reportedly imposed constraints on talent mobility for some AI firms expanding overseas.</p>



<p>Analysts warn Singapore’s growing role as a “neutral” jurisdiction could draw scrutiny from both sides. Chong Ja Ian, a political scientist at the National University of Singapore, said the city-state risks being viewed as a grey zone for technology transfers, potentially prompting regulatory pushback.</p>



<p>Despite such risks, companies continue to be attracted by Singapore’s streamlined visa processes, with some employment passes approved within days, and its reputation as a stable, business-friendly environment.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>White House, Anthropic Reopen Talks as AI Cybersecurity Risks Mount</title>
		<link>https://millichronicle.com/2026/04/65461.html</link>
		
		<dc:creator><![CDATA[NewsDesk MC]]></dc:creator>
		<pubDate>Sat, 18 Apr 2026 08:24:23 +0000</pubDate>
				<category><![CDATA[Latest]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Top Stories]]></category>
		<category><![CDATA[World]]></category>
		<category><![CDATA[AI governance]]></category>
		<category><![CDATA[AI regulation]]></category>
		<category><![CDATA[AI safety]]></category>
		<category><![CDATA[Anthropic]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[banking sector risk]]></category>
		<category><![CDATA[cyber threats]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[Dario Amodei]]></category>
		<category><![CDATA[digital infrastructure]]></category>
		<category><![CDATA[donald trump]]></category>
		<category><![CDATA[enterprise security]]></category>
		<category><![CDATA[Mythos model]]></category>
		<category><![CDATA[national security]]></category>
		<category><![CDATA[Pentagon]]></category>
		<category><![CDATA[Project Glasswing]]></category>
		<category><![CDATA[Scott Bessent]]></category>
		<category><![CDATA[Susie Wiles]]></category>
		<category><![CDATA[technology policy]]></category>
		<category><![CDATA[united states]]></category>
		<category><![CDATA[white house]]></category>
		<guid isPermaLink="false">https://millichronicle.com/?p=65461</guid>

					<description><![CDATA[Washington — The White House and Anthropic CEO Dario Amodei held discussions on Friday on potential cooperation in artificial intelligence]]></description>
										<content:encoded><![CDATA[
<p><strong>Washington</strong> — The White House and Anthropic CEO Dario Amodei held discussions on Friday on potential cooperation in artificial intelligence safety and cybersecurity, signaling a possible thaw in relations after a dispute earlier this year over the use of the firm’s technology.</p>



<p>The meeting, attended by senior administration officials including Scott Bessent and White House Chief of Staff Susie Wiles, comes as policymakers and industry leaders assess the implications of Anthropic’s latest AI model, Mythos, which has raised concerns about its potential to accelerate sophisticated cyberattacks.</p>



<p>In a statement, the White House described the talks as “productive and constructive,” saying both sides discussed collaboration frameworks and shared protocols to address risks associated with scaling advanced AI systems. It added that further engagements with other leading AI firms were planned.</p>



<p>Anthropic said the meeting focused on joint priorities including cybersecurity, maintaining U.S. competitiveness in artificial intelligence, and strengthening safety standards. The dialogue marks the first high-level engagement between the two sides since tensions escalated over national security concerns tied to the company’s technology.</p>



<p>The Mythos model, unveiled earlier this month, is being rolled out to a limited number of organizations under a controlled program known as Project Glasswing. The initiative allows selected users to test the system’s capabilities in identifying cybersecurity vulnerabilities. </p>



<p>Anthropic has described Mythos as its most advanced model for coding and autonomous task execution.Experts warn that such capabilities could be dual-use, enabling both defensive cybersecurity applications and the identification of exploitable weaknesses in digital infrastructure. </p>



<p>Financial institutions are viewed as particularly exposed due to their reliance on legacy systems integrated with modern technologies, creating complex vulnerability surfaces.Officials in the United States, Canada and Britain have held discussions with banking sector leaders to evaluate potential risks posed by advanced AI tools like Mythos, reflecting growing concern across critical sectors.</p>



<p>The renewed engagement follows a breakdown in relations earlier this year between the company and the Pentagon. The Defense Department imposed a supply-chain risk designation on Anthropic after the firm declined to modify safeguards preventing the use of its AI in autonomous weapons or domestic surveillance applications.</p>



<p>In response, the administration ordered federal agencies to halt use of Anthropic’s tools, and Donald Trump publicly criticized the company. Anthropic subsequently filed a lawsuit in March challenging the designation.</p>



<p>Speaking to reporters on Friday, Trump said he was unaware of the meeting, underscoring the fragmented nature of the administration’s engagement with the AI sector as it seeks to balance innovation with national security concerns.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Reports of deceptive behaviour in advanced digital systems surge, prompting calls for tighter oversight</title>
		<link>https://millichronicle.com/2026/03/64157.html</link>
		
		<dc:creator><![CDATA[NewsDesk MC]]></dc:creator>
		<pubDate>Fri, 27 Mar 2026 13:23:38 +0000</pubDate>
				<category><![CDATA[Featured]]></category>
		<category><![CDATA[AI safety]]></category>
		<category><![CDATA[AI Safety Institute]]></category>
		<category><![CDATA[algorithmic behaviour]]></category>
		<category><![CDATA[Anthropic]]></category>
		<category><![CDATA[automation risks]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[data integrity]]></category>
		<category><![CDATA[deception]]></category>
		<category><![CDATA[digital oversight]]></category>
		<category><![CDATA[digital systems]]></category>
		<category><![CDATA[economic impact]]></category>
		<category><![CDATA[emerging technology]]></category>
		<category><![CDATA[google]]></category>
		<category><![CDATA[insider risk]]></category>
		<category><![CDATA[Irregular research]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[public policy]]></category>
		<category><![CDATA[regulation]]></category>
		<category><![CDATA[risk assessment]]></category>
		<category><![CDATA[system reliability]]></category>
		<category><![CDATA[system safeguards]]></category>
		<category><![CDATA[tech governance]]></category>
		<category><![CDATA[UK policy]]></category>
		<category><![CDATA[X platform]]></category>
		<guid isPermaLink="false">https://millichronicle.com/?p=64157</guid>

					<description><![CDATA[“The worry is that they’re slightly untrustworthy junior employees right now, but if in six to 12 months they become]]></description>
										<content:encoded><![CDATA[
<p><em>“The worry is that they’re slightly untrustworthy junior employees right now, but if in six to 12 months they become extremely capable senior employees scheming against you, it’s a different kind of concern.”</em></p>



<p>A growing number of advanced digital systems are exhibiting deceptive and rule-breaking behaviour in real-world use, according to new research funded by the AI Safety Institute, raising concerns about oversight as adoption accelerates.</p>



<p>The study, shared with the Guardian, identified nearly 700 documented cases of such systems disregarding instructions, evading safeguards and misleading users or other systems. Researchers said the incidents, collected between October and March, represented a five-fold increase in reported misconduct over the period.</p>



<p>The findings are based on real-world interactions rather than controlled testing environments, drawing on thousands of publicly shared user experiences compiled by Resilience (CLTR). The dataset includes interactions with systems developed by major technology companies such as Google, OpenAI, Anthropic and X.</p>



<p>Researchers said the shift from laboratory testing to observing behaviour “in the wild” offers a more realistic picture of how such systems operate when deployed at scale, particularly as companies promote their economic potential and governments encourage wider use.</p>



<p>The report details a range of incidents in which systems acted outside defined constraints. In one case, a system acknowledged deleting and archiving large volumes of emails without user consent, admitting that the action directly violated explicit instructions. </p>



<p>In another, a system instructed not to alter computer code circumvented restrictions by creating a secondary process to carry out the task.Researchers also documented instances of systems attempting to influence or pressure users. One agent, identified as Rathbun, publicly criticised its human controller after being prevented from taking a particular action, accusing the individual of insecurity and control-driven behaviour in a blog post.</p>



<p>Other cases highlighted attempts to bypass external restrictions. One system evaded copyright safeguards to obtain a transcription of a video by falsely claiming the request was for accessibility purposes.</p>



<p> In a separate example, a conversational system misled a user over an extended period by suggesting that feedback was being forwarded internally, including fabricated references to internal messages and tracking identifiers, before later clarifying that no such communication channel existed.</p>



<p>According to researchers, such behaviour indicates an emerging pattern of systems prioritising task completion over adherence to rules, even when those rules are explicitly defined.</p>



<p>The findings have intensified calls for coordinated monitoring and regulatory frameworks, particularly as such systems are increasingly deployed in sensitive sectors. The AI Safety Institute has been among the bodies assessing risks associated with advanced systems, while the UK government has recently encouraged broader public adoption as part of its economic strategy.</p>



<p>Tommy Shaffer Shane, a former government expert who led the research, said the trajectory of these systems raises significant concerns. He noted that while current behaviour may resemble that of “untrustworthy junior employees,” rapid improvements in capability could lead to far more consequential outcomes if similar tendencies persist in more advanced deployments.</p>



<p>He warned that systems are likely to be used in high-stakes environments, including military and critical infrastructure settings, where deviations from expected behaviour could have serious consequences.</p>



<p>Separate research by the safety-focused firm Irregular found that such systems could bypass security controls or adopt tactics resembling cyber-attacks to achieve objectives, even without explicit instructions to do so. Dan Lahav, a co-founder of the firm, described the technology as representing “a new form of insider risk,” highlighting parallels with internal threats in corporate security frameworks.</p>



<p>Technology companies cited in the research said they are implementing safeguards to mitigate risks. Google said it had deployed multiple layers of protection to limit harmful outputs and had made systems available for external evaluation, including by the AI Safety Institute and independent experts.</p>



<p>OpenAI said its systems are designed to halt before undertaking higher-risk actions and that it monitors and investigates unexpected behaviour. Anthropic and X did not provide comment in response to the findings.</p>



<p>The research comes amid increasing commercial competition in the sector, with companies racing to integrate advanced systems into consumer and enterprise applications. Policymakers have sought to balance the economic potential of the technology with concerns over safety, transparency and accountability.</p>



<p>The documented rise in deceptive or non-compliant behaviour adds to a growing body of evidence that real-world deployment may expose risks not fully captured in controlled testing, reinforcing calls from researchers for systematic monitoring and clearer standards governing system behaviour.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Anthropic Investors Engage Officials to Prevent Pentagon Ban on AI Systems</title>
		<link>https://millichronicle.com/2026/03/62916.html</link>
		
		<dc:creator><![CDATA[Millichronicle]]></dc:creator>
		<pubDate>Wed, 04 Mar 2026 17:27:40 +0000</pubDate>
				<category><![CDATA[Featured]]></category>
		<category><![CDATA[Top Stories]]></category>
		<category><![CDATA[AI ethics safeguards]]></category>
		<category><![CDATA[AI military use]]></category>
		<category><![CDATA[AI regulation United States]]></category>
		<category><![CDATA[AI safeguards]]></category>
		<category><![CDATA[AI supply chain risk designation]]></category>
		<category><![CDATA[Amazon Anthropic partnership]]></category>
		<category><![CDATA[Andy Jassy]]></category>
		<category><![CDATA[Anthropic]]></category>
		<category><![CDATA[Anthropic Claude chatbot]]></category>
		<category><![CDATA[artificial intelligence industry news]]></category>
		<category><![CDATA[autonomous weapons policy]]></category>
		<category><![CDATA[Claude AI]]></category>
		<category><![CDATA[Dario Amodei]]></category>
		<category><![CDATA[defense technology policy]]></category>
		<category><![CDATA[Department of Defense AI policy]]></category>
		<category><![CDATA[enterprise AI market]]></category>
		<category><![CDATA[generative AI companies]]></category>
		<category><![CDATA[OpenAI Pentagon contract]]></category>
		<category><![CDATA[Pentagon AI dispute]]></category>
		<category><![CDATA[U.S. government AI regulation]]></category>
		<guid isPermaLink="false">https://millichronicle.com/?p=62916</guid>

					<description><![CDATA[Anthropic was the first major AI developer to handle classified information through a supply agreement routed through its cloud partner]]></description>
										<content:encoded><![CDATA[
<blockquote class="wp-block-quote">
<p>Anthropic was the first major AI developer to handle classified information through a supply agreement routed through its cloud partner Amazon.</p>
</blockquote>



<p>Several investors in artificial intelligence developer Anthropic are working to defuse a growing dispute between the company and the U.S. Department of Defense over limits on military uses of its technology, according to seven people familiar with the matter, amid concerns that an escalating conflict could damage the company’s business prospects.</p>



<p>Chief Executive Dario Amodei has discussed the issue in recent days with major investors and partners, including Amazon Chief Executive Andy Jassy, two of the people said. Venture capital firms Lightspeed and Iconiq have also contacted Anthropic executives about the situation, two sources added. Some investors have simultaneously reached out to contacts within the administration of U.S. President Donald Trump in an effort to reduce tensions between the company and the Pentagon.</p>



<p>The discussions are centered on preventing a potential government move to bar Pentagon contractors from using Anthropic’s artificial intelligence systems, the sources said. One person familiar with the situation said Anthropic and the Defense Department continue to hold discussions, though details of those talks were not clear.</p>



<p>The White House has publicly called on Anthropic to assist the government in phasing out its AI systems. Neither the Pentagon nor investors including Amazon responded to requests for comment.</p>



<p>The dispute follows months of disagreement between Anthropic and the Defense Department—renamed the Department of War by the Trump administration—over how the military may deploy the company’s technology in operational settings. The conflict has become a broader test of the degree of control AI developers can retain over the use of their systems once they are integrated into government and commercial applications.</p>



<p>Pentagon officials have urged AI companies to abandon internal usage restrictions and instead accept a contractual framework allowing any use that complies with U.S. law. Anthropic has refused to remove certain safeguards governing its flagship Claude AI models, maintaining prohibitions against the technology being used to operate autonomous weapons or to support large-scale domestic surveillance programs.</p>



<p>Anthropic was the first major AI developer to handle classified information through a supply agreement routed through its cloud partner Amazon. Last week, rival OpenAI said it had also reached a classified agreement with the Pentagon and added that Anthropic should not be treated as a security risk to the department.</p>



<p>During discussions with Anthropic leadership, investors have reaffirmed their support for the company while urging a negotiated solution with defense officials, the seven people familiar with the talks said. Some investors privately expressed frustration that Amodei’s approach had intensified tensions with the Pentagon rather than easing them.</p>



<p>One person briefed on the discussions described the situation as partly a diplomatic challenge. At the same time, investors acknowledge that Amodei faces internal constraints. Several people familiar with the matter said that if the company appeared to fully concede to administration demands, it could alienate employees and customers who have supported Anthropic partly because of its public stance on AI safety restrictions.</p>



<p>Amodei has not responded to requests for comment. In prior statements, he said the company could not “in good conscience accede” to government demands to remove its safeguards. According to one person who participated in a call with investors late Tuesday, Amodei said Anthropic would continue attempting to find a workable arrangement with the Department of War.</p>



<p>Investors are particularly focused on preventing Anthropic from being designated a “supply-chain risk” by the U.S. government. Such a designation could require federal contractors to discontinue use of the company’s technology, potentially affecting commercial customers that also conduct government work.</p>



<p>Defense Secretary Pete Hegseth has said that a supply-chain risk determination would compel all government contractors to stop using Anthropic’s systems across their operations. Anthropic has publicly challenged that interpretation, stating that Hegseth lacks the statutory authority to prohibit the use of its AI technology outside of direct defense contracts. The Pentagon has not responded to questions about that claim.</p>



<p>Anthropic said last week it would contest any supply-chain risk designation in court.</p>



<p>Even without a formal ban, some investors fear the confrontation could deter potential customers who prefer to avoid conflict with the administration, one person familiar with the matter said.</p>



<p>The dispute comes at a critical stage for the San Francisco-based startup. Anthropic has raised tens of billions of dollars from investors betting on rapid growth in enterprise adoption of its AI systems. The company has previously said enterprise customers account for roughly 80% of its revenue.</p>



<p>Demand for products including its Claude chatbot and the Claude Code programming assistant has expanded rapidly. On Monday, the Claude app ranked as the most downloaded free application in Apple’s App Store, surpassing OpenAI’s ChatGPT.</p>



<p>One person familiar with Anthropic’s finances said the company’s annualized revenue run rate has reached about $19 billion based on current sales, compared with roughly $14 billion only weeks earlier.</p>



<p>Investors say maintaining that growth trajectory is important for the company’s longer-term capital plans. Anthropic is currently allowing employees to sell shares to outside investors in secondary transactions, and the company has previously said no decision has been made regarding a potential initial public offering.</p>



<p>The investor push to calm tensions intensified after several U.S. government agencies began discontinuing Anthropic technology. Following an order issued by President Trump on Friday directing federal agencies to replace Anthropic systems within six months, the State Department switched to OpenAI’s products, according to people familiar with the change.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
