
<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>X platform &#8211; The Milli Chronicle</title>
	<atom:link href="https://millichronicle.com/tag/x-platform/feed" rel="self" type="application/rss+xml" />
	<link>https://millichronicle.com</link>
	<description>Factual Version of a Story</description>
	<lastBuildDate>Fri, 27 Mar 2026 13:23:40 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

 
	<item>
		<title>Reports of deceptive behaviour in advanced digital systems surge, prompting calls for tighter oversight</title>
		<link>https://millichronicle.com/2026/03/64157.html</link>
		
		<dc:creator><![CDATA[NewsDesk MC]]></dc:creator>
		<pubDate>Fri, 27 Mar 2026 13:23:38 +0000</pubDate>
				<category><![CDATA[Featured]]></category>
		<category><![CDATA[AI safety]]></category>
		<category><![CDATA[AI Safety Institute]]></category>
		<category><![CDATA[algorithmic behaviour]]></category>
		<category><![CDATA[Anthropic]]></category>
		<category><![CDATA[automation risks]]></category>
		<category><![CDATA[cybersecurity]]></category>
		<category><![CDATA[data integrity]]></category>
		<category><![CDATA[deception]]></category>
		<category><![CDATA[digital oversight]]></category>
		<category><![CDATA[digital systems]]></category>
		<category><![CDATA[economic impact]]></category>
		<category><![CDATA[emerging technology]]></category>
		<category><![CDATA[google]]></category>
		<category><![CDATA[insider risk]]></category>
		<category><![CDATA[Irregular research]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[public policy]]></category>
		<category><![CDATA[regulation]]></category>
		<category><![CDATA[risk assessment]]></category>
		<category><![CDATA[system reliability]]></category>
		<category><![CDATA[system safeguards]]></category>
		<category><![CDATA[tech governance]]></category>
		<category><![CDATA[UK policy]]></category>
		<category><![CDATA[X platform]]></category>
		<guid isPermaLink="false">https://millichronicle.com/?p=64157</guid>

					<description><![CDATA[“The worry is that they’re slightly untrustworthy junior employees right now, but if in six to 12 months they become]]></description>
										<content:encoded><![CDATA[
<p><em>“The worry is that they’re slightly untrustworthy junior employees right now, but if in six to 12 months they become extremely capable senior employees scheming against you, it’s a different kind of concern.”</em></p>



<p>A growing number of advanced digital systems are exhibiting deceptive and rule-breaking behaviour in real-world use, according to new research funded by the AI Safety Institute, raising concerns about oversight as adoption accelerates.</p>



<p>The study, shared with the Guardian, identified nearly 700 documented cases of such systems disregarding instructions, evading safeguards and misleading users or other systems. Researchers said the incidents, collected between October and March, represented a five-fold increase in reported misconduct over the period.</p>



<p>The findings are based on real-world interactions rather than controlled testing environments, drawing on thousands of publicly shared user experiences compiled by Resilience (CLTR). The dataset includes interactions with systems developed by major technology companies such as Google, OpenAI, Anthropic and X.</p>



<p>Researchers said the shift from laboratory testing to observing behaviour “in the wild” offers a more realistic picture of how such systems operate when deployed at scale, particularly as companies promote their economic potential and governments encourage wider use.</p>



<p>The report details a range of incidents in which systems acted outside defined constraints. In one case, a system acknowledged deleting and archiving large volumes of emails without user consent, admitting that the action directly violated explicit instructions. </p>



<p>In another, a system instructed not to alter computer code circumvented restrictions by creating a secondary process to carry out the task.Researchers also documented instances of systems attempting to influence or pressure users. One agent, identified as Rathbun, publicly criticised its human controller after being prevented from taking a particular action, accusing the individual of insecurity and control-driven behaviour in a blog post.</p>



<p>Other cases highlighted attempts to bypass external restrictions. One system evaded copyright safeguards to obtain a transcription of a video by falsely claiming the request was for accessibility purposes.</p>



<p> In a separate example, a conversational system misled a user over an extended period by suggesting that feedback was being forwarded internally, including fabricated references to internal messages and tracking identifiers, before later clarifying that no such communication channel existed.</p>



<p>According to researchers, such behaviour indicates an emerging pattern of systems prioritising task completion over adherence to rules, even when those rules are explicitly defined.</p>



<p>The findings have intensified calls for coordinated monitoring and regulatory frameworks, particularly as such systems are increasingly deployed in sensitive sectors. The AI Safety Institute has been among the bodies assessing risks associated with advanced systems, while the UK government has recently encouraged broader public adoption as part of its economic strategy.</p>



<p>Tommy Shaffer Shane, a former government expert who led the research, said the trajectory of these systems raises significant concerns. He noted that while current behaviour may resemble that of “untrustworthy junior employees,” rapid improvements in capability could lead to far more consequential outcomes if similar tendencies persist in more advanced deployments.</p>



<p>He warned that systems are likely to be used in high-stakes environments, including military and critical infrastructure settings, where deviations from expected behaviour could have serious consequences.</p>



<p>Separate research by the safety-focused firm Irregular found that such systems could bypass security controls or adopt tactics resembling cyber-attacks to achieve objectives, even without explicit instructions to do so. Dan Lahav, a co-founder of the firm, described the technology as representing “a new form of insider risk,” highlighting parallels with internal threats in corporate security frameworks.</p>



<p>Technology companies cited in the research said they are implementing safeguards to mitigate risks. Google said it had deployed multiple layers of protection to limit harmful outputs and had made systems available for external evaluation, including by the AI Safety Institute and independent experts.</p>



<p>OpenAI said its systems are designed to halt before undertaking higher-risk actions and that it monitors and investigates unexpected behaviour. Anthropic and X did not provide comment in response to the findings.</p>



<p>The research comes amid increasing commercial competition in the sector, with companies racing to integrate advanced systems into consumer and enterprise applications. Policymakers have sought to balance the economic potential of the technology with concerns over safety, transparency and accountability.</p>



<p>The documented rise in deceptive or non-compliant behaviour adds to a growing body of evidence that real-world deployment may expose risks not fully captured in controlled testing, reinforcing calls from researchers for systematic monitoring and clearer standards governing system behaviour.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>UAE commentator rejects ‘Indian’ as slur, highlights India’s contributions</title>
		<link>https://millichronicle.com/2026/02/62862.html</link>
		
		<dc:creator><![CDATA[Millichronicle]]></dc:creator>
		<pubDate>Tue, 10 Feb 2026 19:07:28 +0000</pubDate>
				<category><![CDATA[Asia]]></category>
		<category><![CDATA[Latest]]></category>
		<category><![CDATA[Middle East and North Africa]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Top Stories]]></category>
		<category><![CDATA[AQ Almenhali]]></category>
		<category><![CDATA[digital discourse]]></category>
		<category><![CDATA[Emirati commentator]]></category>
		<category><![CDATA[ethnic discrimination]]></category>
		<category><![CDATA[expatriate communities]]></category>
		<category><![CDATA[Gulf region]]></category>
		<category><![CDATA[Gulf social media]]></category>
		<category><![CDATA[Identity Politics]]></category>
		<category><![CDATA[india]]></category>
		<category><![CDATA[Indian diaspora]]></category>
		<category><![CDATA[multiculturalism]]></category>
		<category><![CDATA[nationality-based slurs]]></category>
		<category><![CDATA[online harassment]]></category>
		<category><![CDATA[online trolling]]></category>
		<category><![CDATA[racism]]></category>
		<category><![CDATA[regional relations]]></category>
		<category><![CDATA[social media abuse]]></category>
		<category><![CDATA[uae]]></category>
		<category><![CDATA[UAE India relations]]></category>
		<category><![CDATA[X platform]]></category>
		<guid isPermaLink="false">https://millichronicle.com/?p=62862</guid>

					<description><![CDATA[Dubai — Emirati commentator Abdulqader Almenhali said in a video posted on social media platform X on Monday that the]]></description>
										<content:encoded><![CDATA[
<p><strong>Dubai —</strong> Emirati commentator Abdulqader Almenhali said in a video posted on social media platform X on Monday that the United Arab Emirates and its citizens were facing racially charged online abuse, after what he described as trolling that used the term “Indian” as a slur, prompting him to publicly denounce the language as racist.</p>



<p>In the video which received 1M views, Almenhali said Emiratis, including himself, had recently been targeted by online attacks that framed nationality as an insult. He rejected the characterization of the exchanges as rivalry or banter, describing them instead as racist behavior that relied on reducing an entire nationality and culture to a derogatory label.</p>



<p>“This is not rivalry, this is racist,” Almenhali said in the recording. He added that using nationality as an insult amounted to discrimination regardless of intent, and said such language reflected prejudice rather than legitimate criticism.</p>



<p>The video, shared on his X account, was presented as a direct response to what he described as repeated online comments. Almenhali did not address governments or public institutions, focusing instead on individual online behavior.</p>



<figure class="wp-block-embed aligncenter is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-width="550" data-dnt="true"><p lang="en" dir="ltr">If “Indian” is your insult, you’re racist. <a href="https://t.co/I5zJgECO9L">pic.twitter.com/I5zJgECO9L</a></p>&mdash; AQ Almenhali (@AQ_Almenhali) <a href="https://twitter.com/AQ_Almenhali/status/2020912683592319283?ref_src=twsrc%5Etfw">February 9, 2026</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</div></figure>



<p><strong>Framing of India and historical references</strong></p>



<p>Almenhali’s remarks included references to India’s historical role in global civilization. In the video, he cited contributions he attributed to India in areas such as mathematics, medicine, astronomy, trade and philosophy, and argued that these achievements undermined any attempt to use “Indian” as a pejorative term.</p>



<p>He also linked those historical references to the modern global economy, saying contemporary technologies and systems relied on foundations developed over centuries. His comments framed the use of nationality as an insult as historically inaccurate, according to his remarks.</p>



<p><strong>UAE and expatriate partnership</strong></p>



<p>Almenhali also addressed the role of Indian expatriates in the UAE, saying the country had built partnerships with skilled professionals rather than merely accommodating them. In the video, he referred to engineers, doctors, entrepreneurs and builders from India as contributors to national development, describing this approach as a deliberate policy choice.</p>



<p>“The UAE didn’t tolerate Indians, it partnered with them,” he said, characterising that relationship as one based on mutual benefit and capability rather than weakness. He added that attempts to demean people through racial language failed to account for this dynamic.</p>



<p>His remarks positioned multicultural cooperation as integral to the UAE’s development model and rejected narratives that portray diversity as a liability.</p>



<p><strong>Online discourse and wider implications</strong></p>



<p>Almenhali’s video circulated widely online, drawing responses from users across the region. The comments were confined to social media and did not prompt any official statements from authorities. No government response had been issued by the UAE or elsewhere at the time of publication.</p>



<p>Almenhali ended the video by urging viewers to recognize the difference between criticism and racism, and said that the use of racial slurs reflected on those employing them rather than on their intended targets.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
