<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ChatGPT &#8211; OSLogs</title>
	<atom:link href="https://oslogs.com/tag/chatgpt/feed/" rel="self" type="application/rss+xml" />
	<link>https://oslogs.com</link>
	<description>Logging Operating System Updates</description>
	<lastBuildDate>Wed, 04 Mar 2026 13:25:47 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>

<image>
	<url>https://oslogs.com/wp-content/uploads/2023/05/favicon.png</url>
	<title>ChatGPT &#8211; OSLogs</title>
	<link>https://oslogs.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>The ethical AI war &#8211; Claude, ChatGPT and Pentagon</title>
		<link>https://oslogs.com/2026/03/04/the-ethical-ai-war-claude-chatgpt-and-pentagon/</link>
					<comments>https://oslogs.com/2026/03/04/the-ethical-ai-war-claude-chatgpt-and-pentagon/#respond</comments>
		
		<dc:creator><![CDATA[Nishant Kaushal]]></dc:creator>
		<pubDate>Wed, 04 Mar 2026 13:25:39 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Anthropic]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Claude]]></category>
		<category><![CDATA[OpenAI]]></category>
		<guid isPermaLink="false">https://oslogs.com/?p=8556</guid>

					<description><![CDATA[The year 2026 has been defined by a collision between two of the most powerful forces in the modern world: the rapidly evolving intelligence of frontier AI and the uncompromising demands of national defense. At the center of this storm is a bitter, public, and high-stakes divorce between Anthropic &#8211; the safety-focused darling of Silicon [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div>
<p>The year 2026 has been defined by a collision between two of the most powerful forces in the modern world: the rapidly evolving intelligence of frontier AI and the uncompromising demands of national defense. At the center of this storm is a bitter, public, and high-stakes divorce between <strong>Anthropic</strong> &#8211; the safety-focused darling of Silicon Valley &#8211; and the <strong>U.S. Department of Defense</strong>.</p>



<p>What began as a pioneering partnership to put AI in the &#8220;kill chain&#8221; ended in a 5:01 p.m. ultimatum, a presidential ban, and a massive shift in the public’s loyalty. This isn&#8217;t just a corporate spat; it&#8217;s a foundational debate about who holds the &#8220;kill switch&#8221; for the most powerful technology in human history.</p>



<h2 class="wp-block-heading">The Origins of the Rift: A Partnership Built on Shaky Ground</h2>



<p>The relationship between Anthropic and the Pentagon didn&#8217;t start with hostility. In 2024, Anthropic&#8217;s Claude model became the first large language model (LLM) cleared to operate on the military&#8217;s most sensitive, classified networks. Unlike its competitors, Anthropic&#8217;s &#8220;Constitutional AI&#8221; approach—where the model is trained to follow a specific set of ethical principles—was seen as a feature, not a bug.</p>



<p>In July 2025, the Pentagon awarded Anthropic a $200 million contract to prototype &#8220;agentic AI&#8221; for national security. At the time, Anthropic CEO Dario Amodei stated the company would support &#8220;responsible AI in defense operations&#8221;. However, the fine print contained two non-negotiable &#8220;red lines&#8221;:</p>



<ul class="wp-block-list">
<li><strong>No mass domestic surveillance of American citizens.</strong></li>



<li><strong>No fully autonomous weapons systems</strong> (lethal AI that can decide to kill without a human in the loop).</li>
</ul>



<p>For a while, the arrangement worked. <strong>Claude was integrated through Palantir and used for intelligence analysis and operational planning</strong>. But in January 2026, a U.S. special operations raid in Venezuela that led to the capture of President Nicolás Maduro changed everything. Reports surfaced that Claude had been used to help plan the raid. When an Anthropic executive reportedly asked Palantir if their AI had been used in the kinetic operation, the Pentagon interpreted the inquiry as a sign that a private company was trying to &#8220;audit&#8221; or &#8220;veto&#8221; active military missions.</p>



<h2 class="wp-block-heading">The Ultimatum: What the Pentagon Demanded</h2>



<p>Following the Venezuela operation, the Department of War (DoW), led by Secretary Pete Hegseth, decided that &#8220;ideological guardrails&#8221; were a liability. On February 24, 2026, Hegseth delivered a formal demand: Anthropic must remove all usage restrictions and grant the military access to Claude for &#8220;all lawful purposes&#8221; without exception.</p>



<h3 class="wp-block-heading">Is the Pentagon&#8217;s Demand Fair?</h3>



<p>The Pentagon&#8217;s argument rests on the principle of civilian (and democratic) control. As Under Secretary of War Emil Michael put it in a post on X, &#8220;The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.&#8221;</p>



<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-width="550" data-dnt="true"><p lang="en" dir="ltr">It’s a shame that <a href="https://twitter.com/DarioAmodei?ref_src=twsrc%5Etfw" target="_blank" rel="noopener">@DarioAmodei</a> is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.  <br><br>The <a href="https://twitter.com/DeptofWar?ref_src=twsrc%5Etfw" target="_blank" rel="noopener">@DeptofWar</a> will ALWAYS adhere to the law but not bend to whims of any one for-profit tech… <a href="https://t.co/ZfwXG36Wvl">https://t.co/ZfwXG36Wvl</a></p>&mdash; Under Secretary of War Emil Michael (@USWREMichael) <a href="https://twitter.com/USWREMichael/status/2027211708201058578?ref_src=twsrc%5Etfw" target="_blank" rel="noopener">February 27, 2026</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</div></figure>



<ul class="wp-block-list">
<li><strong>The &#8220;Lethality&#8221; Argument</strong>: The military argues that in a conflict with an adversary like China, milliseconds matter. If an AI detects an incoming drone swarm, it shouldn&#8217;t have to pause to &#8220;check its constitution&#8221; before authorizing a defensive strike.</li>



<li><strong>The &#8220;Law vs. Ethics&#8221; Argument</strong>: The Pentagon contends that if an action is legal under U.S. law and approved by the Commander-in-Chief, a tech CEO has no right to block it. From their perspective, Anthropic&#8217;s stance is a &#8220;master class in arrogance and betrayal&#8221;.</li>
</ul>



<p>However, critics argue that &#8220;lawful purposes&#8221; is a moving target. Laws can be reinterpreted in secret (as seen with the Patriot Act), and the Pentagon&#8217;s demand to use AI for mass surveillance of unclassified commercial data on Americans feels like a bridge too far for many civil libertarians.</p>



<h2 class="wp-block-heading">Anthropic’s Stand: A Question of Conscience</h2>



<p>Anthropic&#8217;s response was a flat &#8220;no&#8221;. On February 26, 2026, <a href="https://www.anthropic.com/news/statement-department-of-war" target="_blank" rel="noopener">Dario Amodei released a statement</a> explaining that the company &#8220;cannot in good conscience accede to their request&#8221;.</p>



<h3 class="wp-block-heading">Is Anthropic&#8217;s Stand Fair?</h3>



<p>Anthropic&#8217;s point of view is rooted in technical reality rather than just moral grandstanding. Amodei argued that:</p>



<ul class="wp-block-list">
<li><strong>AI is Unreliable</strong>: &#8220;Frontier AI systems are simply not reliable enough to power fully autonomous weapons&#8221;. In short, AI still &#8220;hallucinates&#8221;, and a hallucination in a lethal weapons system is a war crime waiting to happen.</li>



<li><strong>The Risk of Mass Surveillance</strong>: Anthropic believes that AI-driven surveillance presents &#8220;serious, novel risks to our fundamental liberties&#8221; that current laws aren&#8217;t equipped to handle.</li>
</ul>



<p>Is it fair for a company to refuse a $200 million contract? Certainly. Is it fair for them to hold &#8220;veto power&#8221; over the military? That is the billion-dollar question. Anthropic argues they aren&#8217;t vetoing the military; they are simply choosing not to be the ones who build the &#8220;Big Brother&#8221; machine.</p>



<h2 class="wp-block-heading">The Fallout: Who Benefited?</h2>



<p>When the 5:01 p.m. deadline on February 27 passed, the retaliatory strikes from the government were swift. President Trump ordered all federal agencies to cease using Anthropic’s technology and labeled the company a &#8220;supply chain risk&#8221;.</p>



<p>The &#8220;supply chain risk&#8221; designation, announced on February 27, 2026, by Secretary Pete Hegseth, represents the first time such a national security sanction &#8211; typically reserved for foreign adversaries like Huawei &#8211; has been turned against a major American technology firm.</p>



<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-width="550" data-dnt="true"><p lang="en" dir="ltr">This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.<br><br>Our position has never wavered and will never waver: the Department of War must have full, unrestricted…</p>&mdash; Secretary of War Pete Hegseth (@SecWar) <a href="https://twitter.com/SecWar/status/2027507717469049070?ref_src=twsrc%5Etfw" target="_blank" rel="noopener">February 27, 2026</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</div></figure>



<h3 class="wp-block-heading">How it Helped OpenAI</h3>



<p>Within hours of Anthropic being blacklisted, OpenAI stepped into the void. CEO Sam Altman announced a deal with the Pentagon to deploy GPT models on classified networks. OpenAI agreed to the &#8220;all lawful use&#8221; language, though Altman claimed they still shared Anthropic’s general &#8220;red lines&#8221;.</p>



<p>OpenAI&#8217;s pivot was a masterstroke of pragmatism. By saying &#8220;yes&#8221; when Anthropic said &#8220;no&#8221;, OpenAI secured its position as the primary AI partner for the U.S. government, ensuring billions in future revenue and deep integration into the state&#8217;s infrastructure. Altman described it as a move to &#8220;de-escalate&#8221; the tension between the tech industry and the government.</p>



<h3 class="wp-block-heading">How it Helped Anthropic</h3>



<p>While Anthropic lost the contract and faces a &#8220;supply chain risk&#8221; designation, it won the PR war. By being &#8220;banned&#8221; by the government for refusing to build &#8220;killer robots&#8221; and &#8220;spy tools&#8221;, Anthropic&#8217;s brand as the &#8220;ethical AI&#8221; was solidified in the public consciousness.</p>



<p>Anthropic&#8217;s stand makes it arguably &#8220;more ethical&#8221; in the eyes of those who prioritize individual rights and safety over national power. OpenAI, conversely, argues that its stance is more democratically aligned because it defers to the laws of the land rather than the personal ethics of its board.</p>



<h2 class="wp-block-heading">The Great Exodus: How Claude Became the People&#8217;s Choice</h2>



<p>The public reaction to the dispute was nothing short of a cultural phenomenon. In the days following the ban, the hashtag <strong><a href="https://x.com/hashtag/QuitGPT?src=hashtag_click">#quitGPT</a></strong> began trending. Users, fearing that OpenAI was becoming a &#8220;wing of the military&#8221;, started deleting their accounts in droves.</p>



<h3 class="wp-block-heading">The Surge of Claude</h3>



<p>According to market data from Sensor Tower, Claude overtook ChatGPT as the <strong>#1 free app on the U.S. App Store</strong> for the first time on March 2, 2026. Anthropic leaned into this, releasing a &#8220;migration tool&#8221; that allowed users to import their entire ChatGPT chat history into Claude in under a minute.</p>



<p><strong>Why did this happen?</strong></p>



<ul class="wp-block-list">
<li><strong>The &#8220;Underdog&#8221; Effect</strong>: Anthropic became the &#8220;David&#8221; fighting the &#8220;Goliath&#8221; of the Pentagon and the White House.</li>



<li><strong>The Trust Gap</strong>: As OpenAI became more secretive and government-aligned, Claude&#8217;s &#8220;Constitutional&#8221; framework felt like a transparent promise to the user.</li>



<li><strong>Performance</strong>: It didn&#8217;t hurt that Claude 4.5 (released earlier that year) was already being hailed as more &#8220;human&#8221; and less prone to the &#8220;robotic&#8221; responses of GPT-5.</li>
</ul>



<p>As of March 4, 2026, Anthropic&#8217;s revenue has ironically surged to a $20 billion run rate, largely driven by the public backlash against the ban and by enterprise users who value the company&#8217;s safety-first stance. However, the legal threat remains existential for its partnerships with cloud providers like AWS.</p>






<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Anthropic Designated Supply Chain Risk, Loses US Work in AI Feud" width="1530" height="861" src="https://www.youtube.com/embed/Dtoco-7cV-o?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>






<p>The Anthropic &#8211; Pentagon dispute of 2026 has drawn a permanent line in the sand. On one side, we have OpenAI, the powerhouse that has chosen to be the engine of the state. On the other, we have Anthropic, which has sacrificed billions to maintain its role as the &#8220;conscientious objector&#8221; of the AI world.</p>



<p>As Secretary Hegseth noted, &#8220;Anthropic&#8217;s relationship with the U.S. Armed Forces has been permanently altered.&#8221; But so has the public&#8217;s relationship with AI. By refusing to let Claude become a weapon, Anthropic didn&#8217;t just lose a contract &#8211; it gained a movement.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://oslogs.com/2026/03/04/the-ethical-ai-war-claude-chatgpt-and-pentagon/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Are Indian youth becoming lab rats to train big AI models?</title>
		<link>https://oslogs.com/2025/11/05/are-indian-youth-becoming-lab-rats-to-train-big-ai-models/</link>
					<comments>https://oslogs.com/2025/11/05/are-indian-youth-becoming-lab-rats-to-train-big-ai-models/#comments</comments>
		
		<dc:creator><![CDATA[Nishant Kaushal]]></dc:creator>
		<pubDate>Wed, 05 Nov 2025 11:00:51 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Gemini AI]]></category>
		<category><![CDATA[Perplexity AI]]></category>
		<guid isPermaLink="false">https://oslogs.com/?p=7700</guid>

					<description><![CDATA[Overnight, the fanciest AI models that once sat behind paywalls are being handed out to millions of people in India for free. It feels like a digital coronation. This week, ChatGPT&#8217;s creator, OpenAI, announced its premium Go tier is now free for an entire year to all of India. It’s a generous gift, a digital [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div>
<p>Overnight, the fanciest AI models that once sat behind paywalls are being handed out to millions of people in India for free.</p>



<p>It feels like a digital coronation. This week, <a href="https://help.openai.com/en/articles/12739021-chatgpt-go-promotion-india" target="_blank" rel="noopener">ChatGPT&#8217;s creator, OpenAI, announced its premium Go tier is now free for an entire year to all of India</a>. It’s a generous gift, a digital key to a powerful kingdom.</p>



<p>This <strong>gift</strong> doesn&#8217;t arrive in a vacuum. It lands just after <a href="https://www.perplexity.ai/help-center/en/articles/11842322-perplexity-pro-airtel-promo" target="_blank" rel="noopener">Perplexity AI offered its own Pro version for free through a partnership with Airtel</a>. And not to be outdone, <a href="https://blog.google/around-the-globe/google-asia/reliance-jio-india-partnership/" target="_blank" rel="noopener">Google has teamed up with Jio to offer its advanced AI Pro plan free</a> for 18 months, specifically targeting the 18-25 year old demographic first.</p>



<p>The giants of Silicon Valley are lining up at our digital doorstep, bearing gifts worth thousands of rupees. The message is clear: India, you are the chosen one.</p>



<p>But it does make you stop and ask, doesn&#8217;t it? Why us? Why all at once? And why… <strong>free?</strong></p>



<h2 class="wp-block-heading">The Generous Offer</h2>



<p>The official line is one of empowerment and market access. We are, after all, home to the world&#8217;s largest population and its fastest-growing digital market. We are a <strong>nation of 1.4 billion people</strong>, with a sea of young, ambitious, tech-savvy minds who are already adopting AI faster than almost anywhere else on Earth.</p>



<p>These companies say they want to democratize access. They want to empower the Indian student, the developer, the small business owner. They see a nation poised to build the future, and they are generously providing the tools to do so.</p>



<p>It’s a compelling story. It&#8217;s also, almost certainly, not the whole story.</p>



<p>Because when the most valuable companies in the world all decide to give away their most valuable products for free to the same 1.4 billion people at the same time, it’s not just generosity. It&#8217;s a strategy.</p>



<p>The old Silicon Valley adage was: &#8220;If you&#8217;re not paying for the product, you are the product.&#8221; Your data was being sold to advertisers.</p>



<p>This is something new. This isn&#8217;t just about our data. It&#8217;s about our <em>intellect</em>.</p>



<p>In this new arrangement, are we the product? Or are we the unpaid <em>workforce</em>?</p>



<h2 class="wp-block-heading">The Real Price: A Billion Trainers</h2>



<p><strong>Ask yourself:</strong> what does the company get when you test a new feature, upload a file for analysis, or rely on an AI for homework, code, or creative work? Beyond immediate usage metrics, every conversation is a training signal. User corrections, edge-case queries, slang, regional languages, and cultural references all help refine the models. Large-scale, unpaid human interaction is arguably the richest ingredient these firms need. The question then isn&#8217;t whether they value our input &#8211; of course they do &#8211; it&#8217;s whether we understand just how much of our free labor we are contributing in exchange for convenience.</p>



<p>Think about what an AI like ChatGPT or Gemini actually is. It&#8217;s not a static encyclopedia. It&#8217;s a learning system. And like any student, it learns through practice, conversation, and &#8211; most importantly &#8211; correction.</p>



<p>What does this <strong>student</strong> need to graduate from being a clever American assistant to a truly global intelligence? It needs to understand the world. And India is a classroom unlike any other.</p>



<p>We are not just a <strong>market</strong>. We are a <strong>dataset</strong>.</p>



<p>A dataset of 1.4 billion people who don&#8217;t just speak English. We speak Hinglish. We speak Thanglish, Kanglish, and Bonglish. We code-switch in the middle of a sentence, blending Hindi grammar with English vocabulary. We ask questions with a unique cultural context that a model trained on American Reddit forums could never understand.</p>



<h2 class="wp-block-heading">The Digital Treadmill</h2>



<p>They aren&#8217;t just giving us free access. They are giving it to the most active, most demanding, and most creative digital population on the planet. They are targeting the young, the developers, the <strong>knowledge workers</strong> who will push these tools to their absolute limits.</p>



<p>Is this empowerment, or is it the world&#8217;s largest, most sophisticated R&amp;D experiment?</p>



<p>Are we the valued customer at the grand opening? Or are we the lab rats, running through a digital maze while the scientists on the other side of the glass take notes?</p>



<p>The <strong>cheese</strong> is a free premium subscription. The <strong>maze</strong> is the infinite canvas of our daily work, our school projects, and our personal curiosities. And the <strong>notes</strong> are the terabytes of training data we provide, making their product smarter, more capable, and ultimately, more valuable.</p>



<p>This isn&#8217;t a secret. The search results for <strong>why India</strong> are full of corporate buzzwords that mean exactly this: we are the <strong>proving ground</strong>, the <strong>testing ground</strong> for <strong>diverse data</strong> and <strong>anomaly detection</strong>. They need us to make their AI work globally.</p>



<h2 class="wp-block-heading">The Question We Must Ask</h2>



<ul class="wp-block-list">
<li><strong>First</strong>, treat <strong>free</strong> as an invitation to look closer: who owns the model, where is the data processed, and what rights does the service reserve over your inputs?</li>



<li><strong>Second</strong>, be deliberate about what you feed to these services &#8211; sensitive personal information, client data, and proprietary work belong in guarded vaults, not casual prompts.</li>



<li><strong>Third</strong>, push for transparency: if corporate playbooks rely on mass user participation to improve models, then companies should be required to disclose how user data is used, anonymized, and retained, and to offer real controls that are easy for ordinary people to use.</li>
</ul>



<p>As we all rush to claim our free year of AI-powered brilliance, we must do so with our eyes wide open. We are not just users. We are a resource. We are the trainers. We are the labor.</p>



<p>The gift has been given. The golden handshake is offered. The question we must now ask ourselves is not <strong>What can I do with this?</strong></p>



<p>The real question is: <strong>What are they doing with me?</strong></p>
]]></content:encoded>
					
					<wfw:commentRss>https://oslogs.com/2025/11/05/are-indian-youth-becoming-lab-rats-to-train-big-ai-models/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>OpenAI: A Tale of Two Eras and a New Crossroads</title>
		<link>https://oslogs.com/2023/11/21/openai-a-tale-of-two-eras-and-a-new-crossroads/</link>
					<comments>https://oslogs.com/2023/11/21/openai-a-tale-of-two-eras-and-a-new-crossroads/#respond</comments>
		
		<dc:creator><![CDATA[Nishant Kaushal]]></dc:creator>
		<pubDate>Tue, 21 Nov 2023 13:47:29 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[Sam Altman]]></category>
		<guid isPermaLink="false">https://oslogs.com/?p=2569</guid>

					<description><![CDATA[In the realm of artificial intelligence, OpenAI stands as a name synonymous with groundbreaking advancements and ambitious aspirations. Founded in 2015 with the noble goal of ensuring that artificial general intelligence (AGI) benefits all of humanity, OpenAI quickly emerged as a leading force in AI research and development. Its early successes, including the release of [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div>
<p>In the realm of artificial intelligence, <a href="https://openai.com/" target="_blank" rel="noopener">OpenAI</a> stands as a name synonymous with groundbreaking advancements and ambitious aspirations. Founded in 2015 with the noble goal of ensuring that artificial general intelligence (AGI) benefits all of humanity, OpenAI quickly emerged as a leading force in AI research and development. Its early successes, including the release of the powerful language model GPT-3, garnered widespread attention and fueled hopes for a future transformed by AI.</p>



<p>At the helm of OpenAI was <a href="https://twitter.com/sama" target="_blank" rel="noopener">Sam Altman</a>, a Silicon Valley veteran with a visionary outlook on the potential of AI. Under his leadership, OpenAI pursued an open-source approach, sharing its research and code with the broader AI community. This strategy, coupled with OpenAI&#8217;s impressive breakthroughs, fostered a sense of collaboration and accelerated progress in the field.</p>



<p>However, OpenAI&#8217;s trajectory took a dramatic turn in 2019 when it pivoted to a for-profit model, securing $1 billion in funding from Microsoft. This shift marked a transition from a research-centric organization to a company with commercial interests. While the influx of capital undoubtedly fueled further innovation, it also raised concerns about OpenAI&#8217;s commitment to its original mission of ensuring safe and beneficial AI development.</p>



<p>Internal tensions began to simmer as OpenAI&#8217;s focus shifted towards product development and monetization. Researchers expressed concerns about the lack of transparency and the potential for AI products being used for harmful purposes. These concerns were exacerbated by OpenAI&#8217;s decision to grant Microsoft exclusive licensing deals for certain technologies, leading to accusations of undue corporate influence.</p>



<p>In the midst of these internal struggles, OpenAI launched <a href="https://chat.openai.com/" target="_blank" rel="noopener">ChatGPT</a>, a free-to-use AI chatbot that quickly gained popularity. ChatGPT&#8217;s ability to engage in seemingly human-like conversations sparked a wave of excitement and hype, with many users praising the chatbot&#8217;s creativity and wit. However, ChatGPT&#8217;s popularity also raised concerns about the potential for misuse, as some users discovered that the chatbot could be used to generate harmful or misleading content.</p>



<p>The launch of ChatGPT and the subsequent hype surrounding it further highlighted the challenges faced by OpenAI as it grapples with the balancing act between commercialization and ethical AI development. The organization&#8217;s ability to address these concerns and regain the trust of its researchers will be crucial in determining its future success.</p>






<p>OpenAI has taken steps to address these concerns, implementing safety measures and filters to prevent ChatGPT from being used for harmful purposes. However, the debate over ChatGPT&#8217;s potential risks and benefits continues, highlighting the complexities of developing and deploying powerful AI technologies.</p>



<p>Meanwhile, the growing rift between OpenAI&#8217;s leadership and its researchers culminated in Sam Altman&#8217;s abrupt departure as CEO in 2023. The board cited a lack of candor in Altman&#8217;s communications as the reason for his termination, but the underlying issues stemmed from the organization&#8217;s shift towards commercialization and the perceived erosion of its core values.</p>



<p>In the aftermath of Altman&#8217;s exit, OpenAI has embarked on a period of introspection and restructuring, grappling with the challenges of balancing its commercial ambitions with its ethical responsibilities. The board first appointed Mira Murati, the company&#8217;s chief technology officer, as interim CEO, before naming former Twitch chief Emmett Shear to the role.</p>



<p>In a surprising turn of events, Mira Murati, along with more than 700 employees, has signed an open letter threatening to resign from OpenAI unless the board steps down and reinstates Altman. These employees, who represent the vast majority of OpenAI&#8217;s workforce, have expressed deep dissatisfaction with the board&#8217;s handling of the situation and its perceived lack of transparency.</p>



<p>The potential mass exodus of researchers poses a significant threat to OpenAI&#8217;s future. Without the expertise and dedication of these individuals, the organization&#8217;s ability to continue its groundbreaking work in AI would be severely hampered. The situation highlights the delicate balance that OpenAI must strike between its commercial aspirations and its commitment to ethical AI development.</p>



<p>As OpenAI navigates this critical juncture, it faces a pivotal decision. Will it continue on its current path, risking the loss of its talented researchers and the erosion of its original mission? Or will it heed the concerns of its employees and chart a course that aligns with its founding principles? The future of OpenAI hangs in the balance, and the choices made today will determine whether the organization lives up to its promise of ensuring that artificial general intelligence benefits all of humanity.</p>



<h3 class="wp-block-heading">Timeline of Events at OpenAI over the past week</h3>



<p><strong>November 17, 2023:</strong></p>



<ul class="wp-block-list">
<li>The board of directors removes Sam Altman as CEO, citing a lack of candor in his communications.</li>



<li>Chief technology officer Mira Murati is appointed interim CEO.</li>
</ul>



<p><strong>November 19, 2023:</strong></p>



<ul class="wp-block-list">
<li>Negotiations over Altman&#8217;s possible return fail, and the board names former Twitch CEO Emmett Shear as interim CEO.</li>
</ul>



<p><strong>November 20, 2023:</strong></p>



<ul class="wp-block-list">
<li>Microsoft CEO Satya Nadella announces that Altman and Greg Brockman will join Microsoft to lead a new advanced AI research team.</li>



<li>Hundreds of OpenAI employees sign an open letter threatening to resign and follow Altman to Microsoft unless the board steps down and reinstates him.</li>
</ul>



<p><strong>November 21, 2023:</strong></p>



<ul class="wp-block-list">
<li>The letter&#8217;s signatories grow to more than 700 of OpenAI&#8217;s roughly 770 employees.</li>



<li>The situation remains fluid, and the choices made in the coming days will determine whether the organization can regain the trust of its researchers and forge a path that aligns with its original mission.</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://oslogs.com/2023/11/21/openai-a-tale-of-two-eras-and-a-new-crossroads/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>MSBuild 2023 &#8211; Key moments and announcements</title>
		<link>https://oslogs.com/2023/05/26/msbuild-2023-key-moments-and-announcements/</link>
					<comments>https://oslogs.com/2023/05/26/msbuild-2023-key-moments-and-announcements/#respond</comments>
		
		<dc:creator><![CDATA[Nishant Kaushal]]></dc:creator>
		<pubDate>Fri, 26 May 2023 07:12:04 +0000</pubDate>
				<category><![CDATA[Events]]></category>
		<category><![CDATA[Windows]]></category>
		<category><![CDATA[Bing]]></category>
		<category><![CDATA[Bing chat]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Copilot]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[MSBuild]]></category>
		<category><![CDATA[WinGet]]></category>
		<guid isPermaLink="false">https://oslogs.com/?p=2184</guid>

					<description><![CDATA[MSBuild 2023 is Microsoft&#8217;s annual affair like the Google I/O that just ended, where they announce the latest and the upcoming features in their products and hold in-depth sessions for the developers and the wider community audience. This year, the Microsoft Build 2023 keynote session witnessed CEO Satya Nadella announcing the AI copilot stack, copilot [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div>
<p>Microsoft Build 2023 is Microsoft&#8217;s annual developer conference, much like the recently concluded <a href="https://oslogs.com/tag/google-io/">Google I/O</a>, where the company announces the latest and upcoming features across its products and holds in-depth sessions for developers and the wider community.</p>



<p>This year, the <a href="https://build.microsoft.com/en-US/sessions/49e81029-20f0-485b-b641-73b7f9622656" target="_blank" rel="noreferrer noopener">Microsoft Build 2023 keynote</a> session witnessed CEO Satya Nadella announcing the AI copilot stack, copilot for Windows 11, Bing search experience for ChatGPT, Microsoft Fabric, and more.</p>



<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-width="550" data-dnt="true"><p lang="en" dir="ltr">We&#39;re announcing more than 50 updates for developers at <a href="https://twitter.com/hashtag/MSBuild?src=hash&amp;ref_src=twsrc%5Etfw" target="_blank" rel="noopener">#MSBuild</a>, from bringing Bing to ChatGPT, to Windows Copilot, to a new Copilot Stack with common extensibility, Azure AI Studio, and Microsoft Fabric, a new data analytics platform. <a href="https://t.co/lyBsZdeBi4">https://t.co/lyBsZdeBi4</a></p>&mdash; Satya Nadella (@satyanadella) <a href="https://twitter.com/satyanadella/status/1661029988752310272?ref_src=twsrc%5Etfw" target="_blank" rel="noopener">May 23, 2023</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</div></figure>



<p>As expected, the biggest highlight of the event was its complete focus on AI. As Panos Panay, Chief Product Officer for Windows and Devices, put it: &#8220;We are just starting to see the incredible impact AI is having across industries and in our own daily lives. Today, the team and I are excited to share the next steps we are taking on our journey with Windows 11, to meet this new age of AI.&#8221;</p>



<h2 class="wp-block-heading">Windows Copilot</h2>



<p>Windows is the first PC platform to provide centralized AI assistance for customers. Together with Bing Chat and first- and third-party plugins, <a href="https://oslogs.com/2023/05/25/what-is-windows-copilot-what-happened-to-cortana/">Windows Copilot</a> lets you stay focused on bringing your ideas to life, completing complex projects, and collaborating, instead of spending energy finding, launching, and working across multiple applications.</p>



<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-width="550" data-dnt="true"><p lang="en" dir="ltr">Introducing Windows Copilot: the first PC platform to centralize AI assistance. <a href="https://twitter.com/hashtag/MSBuild?src=hash&amp;ref_src=twsrc%5Etfw" target="_blank" rel="noopener">#MSBuild</a> <a href="https://t.co/kujctI9Tm3">pic.twitter.com/kujctI9Tm3</a></p>&mdash; Microsoft (@Microsoft) <a href="https://twitter.com/Microsoft/status/1661045178180812805?ref_src=twsrc%5Etfw" target="_blank" rel="noopener">May 23, 2023</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</div></figure>



<p>Invoking Windows Copilot is familiar and easy – the button is front and center on your taskbar – simple to find and use. Once open, the Windows Copilot side bar stays consistent across your apps, programs and windows, always available to act as your personal assistant. It makes every user a power user, helping you take action, customize your settings and seamlessly connect across your favorite apps.</p>



<p>Windows Copilot will start to become available in preview for Windows 11 in June.</p>



<h2 class="wp-block-heading">Bing Chat plugins to Windows</h2>



<p>With Bing and ChatGPT plugins in Windows Copilot, users will not only have access to augmented AI capabilities and experiences, but developers will also have new ways to reach and innovate for their customers.</p>



<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-width="550" data-dnt="true"><p lang="en" dir="ltr">One plugin, endless opportunities. <br><br>Now you can use one platform across products like Bing Chat, ChatGPT and all of Microsoft&#39;s copilots to reach users with the ease of natural language. <a href="https://twitter.com/hashtag/MSBuild?src=hash&amp;ref_src=twsrc%5Etfw" target="_blank" rel="noopener">#MSBuild</a> <a href="https://t.co/1X40z9ihgq">pic.twitter.com/1X40z9ihgq</a></p>&mdash; Microsoft (@Microsoft) <a href="https://twitter.com/Microsoft/status/1661046009600917504?ref_src=twsrc%5Etfw" target="_blank" rel="noopener">May 23, 2023</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</div></figure>



<h2 class="wp-block-heading">Taking Bing to ChatGPT</h2>



<p>Microsoft has been enhancing the Bing experience with the AI power of ChatGPT. Now it is bringing some of Bing&#8217;s search-engine power into ChatGPT, so the two can complement each other for an even better AI experience: Bing becomes the default search experience in ChatGPT, giving users timelier, more up-to-date answers directly within chat.</p>



<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-width="550" data-dnt="true"><p lang="en" dir="ltr">World-class search without ever leaving your chat window.<br> <br>We are bringing the power of Bing to ChatGPT as the default search experience. Users will have access to timelier and more up-to-date answers by enabling a plugin—all directly within chat. <a href="https://twitter.com/hashtag/MSBuild?src=hash&amp;ref_src=twsrc%5Etfw" target="_blank" rel="noopener">#MSBuild</a> <a href="https://t.co/XLyKVYYfSO">pic.twitter.com/XLyKVYYfSO</a></p>&mdash; Microsoft (@Microsoft) <a href="https://twitter.com/Microsoft/status/1661043336709373953?ref_src=twsrc%5Etfw" target="_blank" rel="noopener">May 23, 2023</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</div></figure>



<h2 class="wp-block-heading">Dev Home</h2>



<p>This is an incredible time to be a developer on Windows. The possibilities across industries &#8211; healthcare, finance, education, tech, and others &#8211; are endless. To help developers get started, Microsoft introduced Dev Home, a new hub for Windows 11 that centralizes machine setup, a customizable dashboard of widgets, and GitHub integration for tracking your projects.</p>



<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-width="550" data-dnt="true"><p lang="en" dir="ltr">With Dev Home we introduce a new home for developers on <a href="https://twitter.com/hashtag/Windows11?src=hash&amp;ref_src=twsrc%5Etfw" target="_blank" rel="noopener">#Windows11</a>. Take a look… <a href="https://twitter.com/hashtag/MSBuild?src=hash&amp;ref_src=twsrc%5Etfw" target="_blank" rel="noopener">#MSBuild</a> <a href="https://t.co/ZAtfne4Oj3">pic.twitter.com/ZAtfne4Oj3</a></p>&mdash; Panos Panay (@panos_panay) <a href="https://twitter.com/panos_panay/status/1661033245826498560?ref_src=twsrc%5Etfw" target="_blank" rel="noopener">May 23, 2023</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</div></figure>



<h2 class="wp-block-heading">Windows AI Library</h2>



<p>With the <a href="https://learn.microsoft.com/en-us/windows/ai/" target="_blank" rel="noreferrer noopener">Windows AI library</a>, you can transform your Windows application with the power of artificial intelligence. It will house a curated collection of ready-to-use machine learning models and APIs to help jumpstart your AI development.</p>



<h2 class="wp-block-heading">GitHub Copilot X</h2>



<p>Users of GitHub Copilot will be able to take advantage of natural language AI both inline and in an experimental chat experience to recommend commands, explain errors and take actions within the Terminal application.</p>



<h2 class="wp-block-heading">WinGet Configuration Files</h2>



<p>Using a <a href="https://learn.microsoft.com/en-us/windows/package-manager/configuration/" target="_blank" rel="noreferrer noopener">WinGet Configuration file</a>, you can consolidate manual machine setup and project onboarding to a single command that is reliable and repeatable.</p>



<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-width="550" data-dnt="true"><p lang="en" dir="ltr">With the new WinGet configuration, developers can get ready-to-code in just a few clicks. <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/26a1.png" alt="⚡" class="wp-smiley" style="height: 1em; max-height: 1em;" /><img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f499.png" alt="💙" class="wp-smiley" style="height: 1em; max-height: 1em;" /><br><br>Learn more about WinGet configuration here: <a href="https://t.co/aF8uPcH4Px">https://t.co/aF8uPcH4Px</a><a href="https://twitter.com/hashtag/MSBuild?src=hash&amp;ref_src=twsrc%5Etfw" target="_blank" rel="noopener">#MSBuild</a> <a href="https://t.co/brWSGTCg65">pic.twitter.com/brWSGTCg65</a></p>&mdash; Windows Developer (@windowsdev) <a href="https://twitter.com/windowsdev/status/1661771619461656579?ref_src=twsrc%5Etfw" target="_blank" rel="noopener">May 25, 2023</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</div></figure>
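To make the idea concrete, here is a minimal sketch of what a WinGet Configuration file can look like. It uses the documented `Microsoft.WinGet.DSC/WinGetPackage` resource; the file name and the packages chosen (Git and Visual Studio Code) are illustrative, not prescribed by Microsoft.

```yaml
# configuration.dsc.yaml — a minimal WinGet Configuration sketch.
# The packages below are examples; swap in whatever your project needs.
properties:
  configurationVersion: 0.2.0
  resources:
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Git
      settings:
        id: Git.Git
        source: winget
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Visual Studio Code
      settings:
        id: Microsoft.VisualStudioCode
        source: winget
```

A file like this is applied with a single command, `winget configure -f configuration.dsc.yaml`, which makes the setup repeatable across machines and easy to check into a repository alongside the code it onboards.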



<p>Given the pace at which Microsoft is developing and integrating AI into every tool it introduces, the company is clearly at the forefront of the AI wave. It remains to be seen whether it can sustain that pace amid the competition Google brings with its <a href="https://oslogs.com/tag/bard/">Bard AI</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://oslogs.com/2023/05/26/msbuild-2023-key-moments-and-announcements/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
