<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI &#8211; OSLogs</title>
	<atom:link href="https://oslogs.com/tag/ai/feed/" rel="self" type="application/rss+xml" />
	<link>https://oslogs.com</link>
	<description>Logging Operating System Updates</description>
	<lastBuildDate>Wed, 04 Mar 2026 13:25:47 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>

<image>
	<url>https://oslogs.com/wp-content/uploads/2023/05/favicon.png</url>
	<title>AI &#8211; OSLogs</title>
	<link>https://oslogs.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>The ethical AI war &#8211; Claude, ChatGPT and Pentagon</title>
		<link>https://oslogs.com/2026/03/04/the-ethical-ai-war-claude-chatgpt-and-pentagon/</link>
					<comments>https://oslogs.com/2026/03/04/the-ethical-ai-war-claude-chatgpt-and-pentagon/#respond</comments>
		
		<dc:creator><![CDATA[Nishant Kaushal]]></dc:creator>
		<pubDate>Wed, 04 Mar 2026 13:25:39 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Anthropic]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Claude]]></category>
		<category><![CDATA[OpenAI]]></category>
		<guid isPermaLink="false">https://oslogs.com/?p=8556</guid>

					<description><![CDATA[The year 2026 has been defined by a collision between two of the most powerful forces in the modern world: the rapidly evolving intelligence of frontier AI and the uncompromising demands of national defense. At the center of this storm is a bitter, public, and high-stakes divorce between Anthropic &#8211; the safety-focused darling of Silicon [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div>
<p>The year 2026 has been defined by a collision between two of the most powerful forces in the modern world: the rapidly evolving intelligence of frontier AI and the uncompromising demands of national defense. At the center of this storm is a bitter, public, and high-stakes divorce between <strong>Anthropic</strong> &#8211; the safety-focused darling of Silicon Valley &#8211; and the <strong>U.S. Department of Defense</strong>.</p>



<p>What began as a pioneering partnership to put AI in the &#8220;kill chain&#8221; ended in a 5:01 p.m. ultimatum, a presidential ban, and a massive shift in the public’s loyalty. This isn&#8217;t just a corporate spat; it&#8217;s a foundational debate about who holds the &#8220;kill switch&#8221; for the most powerful technology in human history.</p>



<h2 class="wp-block-heading">The Origins of the Rift: A Partnership Built on Shaky Ground</h2>



<p>The relationship between Anthropic and the Pentagon didn&#8217;t start with hostility. In 2024, Anthropic&#8217;s Claude model became the first large language model (LLM) cleared to operate on the military&#8217;s most sensitive, classified networks. Unlike its competitors, Anthropic&#8217;s &#8220;Constitutional AI&#8221; approach—where the model is trained to follow a specific set of ethical principles—was seen as a feature, not a bug.</p>



<p>In July 2025, the Pentagon awarded Anthropic a $200 million contract to prototype &#8220;agentic AI&#8221; for national security. At the time, Anthropic CEO Dario Amodei stated the company would support &#8220;responsible AI in defense operations&#8221;. However, the fine print contained two non-negotiable &#8220;red lines&#8221;:</p>



<ul class="wp-block-list">
<li><strong>No mass domestic surveillance of American citizens.</strong></li>



<li><strong>No fully autonomous weapons systems</strong> (lethal AI that can decide to kill without a human in the loop).</li>
</ul>



<p>For a while, the arrangement worked. <strong>Claude was integrated through Palantir and used for intelligence analysis and operational planning</strong>. But in January 2026, a U.S. special operations raid in Venezuela that led to the capture of President Nicolás Maduro changed everything. Reports surfaced that Claude had been used to help plan the raid. When an Anthropic executive reportedly asked Palantir if their AI had been used in the kinetic operation, the Pentagon interpreted the inquiry as a sign that a private company was trying to &#8220;audit&#8221; or &#8220;veto&#8221; active military missions.</p>



<h2 class="wp-block-heading">The Ultimatum: What the Pentagon Demanded</h2>



<p>Following the Venezuela operation, the Department of War (DoW), led by Secretary Pete Hegseth, decided that &#8220;ideological guardrails&#8221; were a liability. On February 24, 2026, Hegseth delivered a formal demand: Anthropic must remove all usage restrictions and grant the military access to Claude for &#8220;all lawful purposes&#8221; without exception.</p>



<h3 class="wp-block-heading">Is the Pentagon&#8217;s Demand Fair?</h3>



<p>The Pentagon&#8217;s argument rests on the principle of civilian (and democratic) control. As Under Secretary of War Emil Michael put it in a post on X, &#8220;The @DeptofWar will ALWAYS adhere to the law but not bend to the whims of any one for-profit tech company.&#8221;</p>



<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-width="550" data-dnt="true"><p lang="en" dir="ltr">It’s a shame that <a href="https://twitter.com/DarioAmodei?ref_src=twsrc%5Etfw" target="_blank" rel="noopener">@DarioAmodei</a> is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.  <br><br>The <a href="https://twitter.com/DeptofWar?ref_src=twsrc%5Etfw" target="_blank" rel="noopener">@DeptofWar</a> will ALWAYS adhere to the law but not bend to whims of any one for-profit tech… <a href="https://t.co/ZfwXG36Wvl">https://t.co/ZfwXG36Wvl</a></p>&mdash; Under Secretary of War Emil Michael (@USWREMichael) <a href="https://twitter.com/USWREMichael/status/2027211708201058578?ref_src=twsrc%5Etfw" target="_blank" rel="noopener">February 27, 2026</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</div></figure>



<ul class="wp-block-list">
<li><strong>The &#8220;Lethality&#8221; Argument</strong>: The military argues that in a conflict with an adversary like China, milliseconds matter. If an AI detects an incoming drone swarm, it shouldn&#8217;t have to pause to &#8220;check its constitution&#8221; before authorizing a defensive strike.</li>



<li><strong>The &#8220;Law vs. Ethics&#8221; Argument</strong>: The Pentagon contends that if an action is legal under U.S. law and approved by the Commander-in-Chief, a tech CEO has no right to block it. From their perspective, Anthropic&#8217;s stance is a &#8220;master class in arrogance and betrayal&#8221;.</li>
</ul>



<p>However, critics argue that &#8220;lawful purposes&#8221; is a moving target. Laws can be reinterpreted in secret (as seen with the Patriot Act), and the Pentagon&#8217;s demand to use AI for mass surveillance of unclassified commercial data on Americans feels like a bridge too far for many civil libertarians.</p>



<h2 class="wp-block-heading">Anthropic’s Stand: A Question of Conscience</h2>



<p>Anthropic&#8217;s response was a flat &#8220;no&#8221;. On February 26, 2026, <a href="https://www.anthropic.com/news/statement-department-of-war" target="_blank" rel="noopener">Dario Amodei released a statement</a> explaining that the company &#8220;cannot in good conscience accede to their request&#8221;.</p>



<h3 class="wp-block-heading">Is Anthropic&#8217;s Stand Fair?</h3>



<p>Anthropic&#8217;s point of view is rooted in technical reality rather than just moral grandstanding. Amodei argued that:</p>



<ul class="wp-block-list">
<li><strong>AI is Unreliable</strong>: &#8220;Frontier AI systems are simply not reliable enough to power fully autonomous weapons&#8221;. In short, AI still &#8220;hallucinates&#8221;, and a hallucination in a lethal weapons system is a war crime waiting to happen.</li>



<li><strong>The Risk of Mass Surveillance</strong>: Anthropic believes that AI-driven surveillance presents &#8220;serious, novel risks to our fundamental liberties&#8221; that current laws aren&#8217;t equipped to handle.</li>
</ul>



<p>Is it fair for a company to refuse a $200 million contract? Certainly. Is it fair for them to hold &#8220;veto power&#8221; over the military? That is the billion-dollar question. Anthropic argues they aren&#8217;t vetoing the military; they are simply choosing not to be the ones who build the &#8220;Big Brother&#8221; machine.</p>



<h2 class="wp-block-heading">The Fallout: Who Benefited?</h2>



<p>When the 5:01 p.m. deadline on February 27 passed, the retaliatory strikes from the government were swift. President Trump ordered all federal agencies to cease using Anthropic’s technology and labeled the company a &#8220;supply chain risk&#8221;.</p>



<p>The &#8220;supply chain risk&#8221; designation, announced on February 27, 2026, by Secretary Pete Hegseth, represents the first time such a national security sanction &#8211; typically reserved for foreign adversaries like Huawei &#8211; has been turned against a major American technology firm.</p>



<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-width="550" data-dnt="true"><p lang="en" dir="ltr">This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.<br><br>Our position has never wavered and will never waver: the Department of War must have full, unrestricted…</p>&mdash; Secretary of War Pete Hegseth (@SecWar) <a href="https://twitter.com/SecWar/status/2027507717469049070?ref_src=twsrc%5Etfw" target="_blank" rel="noopener">February 27, 2026</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</div></figure>



<h3 class="wp-block-heading">How it Helped OpenAI</h3>



<p>Within hours of Anthropic being blacklisted, OpenAI stepped into the void. CEO Sam Altman announced a deal with the Pentagon to deploy GPT models on classified networks. OpenAI agreed to the &#8220;all lawful use&#8221; language, though Altman claimed they still shared Anthropic’s general &#8220;red lines&#8221;.</p>



<p>OpenAI&#8217;s pivot was a masterstroke of pragmatism. By saying &#8220;yes&#8221; when Anthropic said &#8220;no&#8221;, OpenAI secured its position as the primary AI partner for the U.S. government, ensuring billions in future revenue and deep integration into the state&#8217;s infrastructure. Altman described it as a move to &#8220;de-escalate&#8221; the tension between the tech industry and the government.</p>



<h3 class="wp-block-heading">How it Helped Anthropic</h3>



<p>While Anthropic lost the contract and faces a &#8220;supply chain risk&#8221; designation, it won the PR war. By being &#8220;banned&#8221; by the government for refusing to build &#8220;killer robots&#8221; and &#8220;spy tools&#8221;, Anthropic&#8217;s brand as the &#8220;ethical AI&#8221; was solidified in the public consciousness.</p>



<p>Anthropic&#8217;s stand makes it arguably &#8220;more ethical&#8221; in the eyes of those who prioritize individual rights and safety over national power. OpenAI, conversely, argues that its stance is more democratically aligned because it defers to the laws of the land rather than the personal ethics of its board.</p>



<h2 class="wp-block-heading">The Great Exodus: How Claude Became the People&#8217;s Choice</h2>



<p>The public reaction to the dispute was nothing short of a cultural phenomenon. In the days following the ban, the hashtag <strong><a href="https://x.com/hashtag/QuitGPT?src=hashtag_click">#quitGPT</a></strong> began trending. Users, fearing that OpenAI was becoming a &#8220;wing of the military&#8221;, started deleting their accounts in droves.</p>



<h3 class="wp-block-heading">The Surge of Claude</h3>



<p>According to market data from Sensor Tower, Claude overtook ChatGPT as the <strong>#1 free app on the U.S. App Store</strong> for the first time on March 2, 2026. Anthropic leaned into this, releasing a &#8220;migration tool&#8221; that allowed users to import their entire ChatGPT chat history into Claude in under a minute.</p>



<p><strong>Why did this happen?</strong></p>



<ul class="wp-block-list">
<li><strong>The &#8220;Underdog&#8221; Effect</strong>: Anthropic became the &#8220;David&#8221; fighting the &#8220;Goliath&#8221; of the Pentagon and the White House.</li>



<li><strong>The Trust Gap</strong>: As OpenAI became more secretive and government-aligned, Claude&#8217;s &#8220;Constitutional&#8221; framework felt like a transparent promise to the user.</li>



<li><strong>Performance</strong>: It didn&#8217;t hurt that Claude 4.5 (released earlier that year) was already being hailed as more &#8220;human&#8221; and less prone to the &#8220;robotic&#8221; responses of GPT-5.</li>
</ul>



<p>As of March 4, 2026, Anthropic&#8217;s revenue has ironically surged to a $20 billion run rate, largely driven by a &#8220;backlash&#8221; of public support and enterprise users who value their safety-first stance. However, the legal threat remains existential for their partnership with cloud providers like AWS.</p>






<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="Anthropic Designated Supply Chain Risk, Loses US Work in AI Feud" width="1530" height="861" src="https://www.youtube.com/embed/Dtoco-7cV-o?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>






<p>The Anthropic &#8211; Pentagon dispute of 2026 has drawn a permanent line in the sand. On one side, we have OpenAI, the powerhouse that has chosen to be the engine of the state. On the other, we have Anthropic, which has sacrificed billions to maintain its role as the &#8220;conscientious objector&#8221; of the AI world.</p>



<p>As Secretary Hegseth noted, &#8220;Anthropic&#8217;s relationship with the U.S. Armed Forces has been permanently altered.&#8221; But so has the public&#8217;s relationship with AI. By refusing to let Claude become a weapon, Anthropic didn&#8217;t just lose a contract &#8211; it gained a movement.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://oslogs.com/2026/03/04/the-ethical-ai-war-claude-chatgpt-and-pentagon/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Are Indian youth becoming lab rats to train big AI models?</title>
		<link>https://oslogs.com/2025/11/05/are-indian-youth-becoming-lab-rats-to-train-big-ai-models/</link>
					<comments>https://oslogs.com/2025/11/05/are-indian-youth-becoming-lab-rats-to-train-big-ai-models/#comments</comments>
		
		<dc:creator><![CDATA[Nishant Kaushal]]></dc:creator>
		<pubDate>Wed, 05 Nov 2025 11:00:51 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI models]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Gemini AI]]></category>
		<category><![CDATA[Perplexity AI]]></category>
		<guid isPermaLink="false">https://oslogs.com/?p=7700</guid>

					<description><![CDATA[Overnight, the fanciest AI models that once sat behind paywalls are being handed out to millions of people in India for free. It feels like a digital coronation. This week, ChatGPT&#8217;s creator, OpenAI, announced its premium Go tier is now free for an entire year to all of India. It’s a generous gift, a digital [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div>
<p>Overnight, the fanciest AI models that once sat behind paywalls are being handed out to millions of people in India for free.</p>



<p>It feels like a digital coronation. This week, <a href="https://help.openai.com/en/articles/12739021-chatgpt-go-promotion-india" target="_blank" rel="noopener">ChatGPT&#8217;s creator, OpenAI, announced its premium Go tier is now free for an entire year to all of India</a>. It’s a generous gift, a digital key to a powerful kingdom.</p>



<p>This <strong>gift</strong> doesn&#8217;t arrive in a vacuum. It lands just after <a href="https://www.perplexity.ai/help-center/en/articles/11842322-perplexity-pro-airtel-promo" target="_blank" rel="noopener">Perplexity AI offered its own Pro version for free through a partnership with Airtel</a>. And not to be outdone, <a href="https://blog.google/around-the-globe/google-asia/reliance-jio-india-partnership/" target="_blank" rel="noopener">Google has teamed up with Jio to offer its advanced AI Pro plan free</a> for 18 months, specifically targeting the 18-25 year old demographic first.</p>



<p>The giants of Silicon Valley are lining up at our digital doorstep, bearing gifts worth thousands of rupees. The message is clear: India, you are the chosen one.</p>



<p>But it does make you stop and ask, doesn&#8217;t it? Why us? Why all at once? And why… <strong>free?</strong></p>



<h2 class="wp-block-heading">The Generous Offer</h2>



<p>The official line is one of empowerment and market access. We are, after all, the world&#8217;s most populous nation and its fastest-growing digital market. We are a <strong>nation of 1.4 billion people</strong>, with a sea of young, ambitious, tech-savvy minds who are already adopting AI faster than almost anywhere else on Earth.</p>



<p>These companies say they want to democratize access. They want to empower the Indian student, the developer, the small business owner. They see a nation poised to build the future, and they are generously providing the tools to do so.</p>



<p>It’s a compelling story. It&#8217;s also, almost certainly, not the whole story.</p>



<p>Because when the most valuable companies in the world all decide to give away their most valuable products for free to the same 1.4 billion people at the same time, it’s not just generosity. It&#8217;s a strategy.</p>



<p>The old Silicon Valley adage was: &#8220;If you&#8217;re not paying for the product, you are the product.&#8221; Your data was being sold to advertisers.</p>



<p>This is something new. This isn&#8217;t just about our data. It&#8217;s about our <em>intellect</em>.</p>



<p>In this new arrangement, are we the product? Or are we the unpaid <em>workforce</em>?</p>



<h2 class="wp-block-heading">The Real Price: A Billion Trainers</h2>



<p><strong>Ask yourself:</strong> what does the company get when you test a new feature, upload a file for analysis, or rely on an AI for homework, code, or creative work? Beyond immediate usage metrics, every conversation is a training signal. User corrections, edge-case queries, slang, regional languages, and cultural references all help refine the models. Large-scale, unpaid human interaction is arguably the richest ingredient these firms need. The question then isn&#8217;t whether they value our input &#8211; of course they do &#8211; it&#8217;s whether we understand just how much of our free labor we are contributing in exchange for convenience.</p>



<p>Think about what an AI like ChatGPT or Gemini actually is. It&#8217;s not a static encyclopedia. It&#8217;s a learning system. And like any student, it learns through practice, conversation, and &#8211; most importantly &#8211; correction.</p>



<p>What does this <strong>student</strong> need to graduate from being a clever American assistant to a truly global intelligence? It needs to understand the world. And India is a classroom unlike any other.</p>



<p>We are not just a <strong>market</strong>. We are a <strong>dataset</strong>.</p>



<p>A dataset of 1.4 billion people who don&#8217;t just speak English. We speak Hinglish. We speak Thanglish, Kanglish, and Bonglish. We code-switch in the middle of a sentence, blending Hindi grammar with English vocabulary. We ask questions with a unique cultural context that a model trained on American Reddit forums could never understand.</p>



<h2 class="wp-block-heading">The Digital Treadmill</h2>



<p>They aren&#8217;t just giving us free access. They are giving it to the most active, most demanding, and most creative digital population on the planet. They are targeting the young, the developers, the <strong>knowledge workers</strong> who will push these tools to their absolute limits.</p>



<p>Is this empowerment, or is it the world&#8217;s largest, most sophisticated R&amp;D experiment?</p>



<p>Are we the valued customer at the grand opening? Or are we the lab rats, running through a digital maze while the scientists on the other side of the glass take notes?</p>



<p>The <strong>cheese</strong> is a free premium subscription. The <strong>maze</strong> is the infinite canvas of our daily work, our school projects, and our personal curiosities. And the <strong>notes</strong> are the terabytes of training data we provide, making their product smarter, more capable, and ultimately, more valuable.</p>



<p>This isn&#8217;t a secret. Search for <strong>why India</strong> and the corporate buzzwords tell you exactly this: we are the <strong>proving ground</strong>, the <strong>testing ground</strong> for <strong>diverse data</strong> and <strong>anomaly detection</strong>. They need us to make their AI work globally.</p>



<h2 class="wp-block-heading">The Question We Must Ask</h2>



<ul class="wp-block-list">
<li><strong>First</strong>, treat <strong>free</strong> as an invitation to look closer: who owns the model, where your data is processed, and what rights the service reserves over your inputs?</li>



<li><strong>Second</strong>, be deliberate about what you feed to these services &#8211; sensitive personal information, client data, and proprietary work belong in guarded vaults, not casual prompts.</li>



<li><strong>Third</strong>, push for transparency: if corporate playbooks rely on mass user participation to improve models, then companies should be required to disclose how user data is used, anonymized, and retained, and to offer real controls that are easy for ordinary people to use.</li>
</ul>



<p>As we all rush to claim our free year of AI-powered brilliance, we must do so with our eyes wide open. We are not just users. We are a resource. We are the trainers. We are the labor.</p>



<p>The gift has been given. The golden handshake is offered. The question we must now ask ourselves is not <strong>What can I do with this?</strong></p>



<p>The real question is: <strong>What are they doing with me?</strong></p>
]]></content:encoded>
					
					<wfw:commentRss>https://oslogs.com/2025/11/05/are-indian-youth-becoming-lab-rats-to-train-big-ai-models/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Bard in India, no more waitlist</title>
		<link>https://oslogs.com/2023/05/11/bard-in-india-no-more-waitlist/</link>
					<comments>https://oslogs.com/2023/05/11/bard-in-india-no-more-waitlist/#respond</comments>
		
		<dc:creator><![CDATA[Nishant Kaushal]]></dc:creator>
		<pubDate>Thu, 11 May 2023 11:16:53 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[bard]]></category>
		<guid isPermaLink="false">https://oslogs.com/?p=2117</guid>

					<description><![CDATA[Bard in India, no more waitlists! As part of Google&#8217;s announcement, made during the Google I/O 2023 event, that there wont be any more waitlist process and will now be available across 180 countries! As India begins playing around with Bard, lets try to understand everything about it! What is Bard Bard is a large [&#8230;]]]></description>
										<content:encoded><![CDATA[<div id="bsf_rt_marker"></div>
<p>Bard in India, no more waitlists! As part of Google&#8217;s announcement made during the <a href="https://oslogs.com/tag/google-io/" target="_blank" rel="noreferrer noopener">Google I/O 2023</a> event, the waitlist process has been dropped and Bard is now available across 180 countries!</p>



<p>As India begins playing around with Bard, let&#8217;s try to understand everything about it!</p>



<h2 class="wp-block-heading">What is Bard</h2>



<p>Bard is a large language model, also known as a conversational AI or chatbot, trained to be informative and comprehensive. It is trained on a massive amount of text data and is able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, it can provide summaries of factual topics or create stories.</p>



<p>As a creative and helpful collaborator, Bard can supercharge your imagination, boost your productivity, and help you bring your ideas to life—whether you want help planning the perfect birthday party and drafting the invitation, creating a pro &amp; con list for a big decision, or understanding really complex topics simply.</p>



<h2 class="wp-block-heading">How is Bard different from ChatGPT?</h2>



<p>Bard and ChatGPT are both large language models, but they have some key differences.</p>



<ul class="wp-block-list">
<li><strong>Data source</strong>: Bard is trained on an &#8220;infiniset&#8221; of data chosen to enhance its dialogue and has real-time access to the internet, whereas ChatGPT is trained on a pre-defined dataset that hasn&#8217;t been updated since 2021.</li>

<li><strong>Accuracy</strong>: Bard uses LaMDA, a model built specifically for dialogue applications, while ChatGPT uses GPT-3.5. LaMDA is trained to find patterns in sentences and between words so it can model dialogue rather than individual words.</li>

<li><strong>Creativity</strong>: Bard can generate a wide range of original text formats, like poems, code, scripts, musical pieces, emails, and letters.</li>
</ul>



<p>Overall, Bard tends to offer its answers in multiple drafts, while ChatGPT generates a single response to each prompt.</p>



<h2 class="wp-block-heading">What is the difference between Conversational AI and Generative AI?</h2>



<p>Conversational AI refers to artificial intelligence that can engage in conversation &#8211; tools that let users communicate with virtual assistants or chatbots. These systems mimic human interaction by recognizing speech and text inputs and, using massive amounts of data, machine learning, and natural language processing, responding in kind (including translating content into other languages). Generative AI, by contrast, often uses deep learning techniques, such as generative adversarial networks (GANs), to identify patterns and features in a given dataset and then create new data from that input.</p>



<h2 class="wp-block-heading">Can the responses of AI be trusted?</h2>



<p>Any AI is built upon the data fed to it during its training, so any response from any AI bot should be taken with a pinch of salt and must be cross-checked. Since the AI industry is still developing, it&#8217;s too early to compare which AI technology is better or more accurate and which is not.</p>



<p>Accelerating people’s ideas with generative AI is truly exciting, but it&#8217;s still early days, and Bard is an experiment. While Bard has built-in safety controls and clear mechanisms for feedback in line with Google&#8217;s <a href="https://ai.google/responsibility/principles/" target="_blank" rel="noreferrer noopener">AI Principles</a>, be aware that it may display inaccurate information or offensive statements.</p>



<h2 class="wp-block-heading">Can Bard help with coding?</h2>



<p>Yes, Bard can help with coding and topics about coding, but it is still experimental, and you are responsible for your use of its code and coding explanations. Use discretion, and carefully test and review all code for errors, bugs, and vulnerabilities before relying on it. Code may also be subject to an open source license, and Bard provides related information where applicable.</p>



<h2 class="wp-block-heading">How to access Bard?</h2>



<p><a href="https://bard.google.com/" target="_blank" rel="noreferrer noopener">Visit Google&#8217;s Bard</a> site, read and agree to the terms and conditions, and you are then free to use it!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://oslogs.com/2023/05/11/bard-in-india-no-more-waitlist/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
