
Kan & Company

Marketing for results


Archives for October 2025

Echo Chambers and the Algorithmic Divide: How Social Media Polarizes Society

October 13, 2025

Protesters have been harassing and intimidating Winston Peters and his family after they were told where he lived.

In the age of algorithm-driven content, social media platforms have become less like public squares and more like curated echo chambers. What began as a promise of open dialogue has evolved into a system that rewards outrage over nuance, reinforcing users’ existing beliefs and filtering out dissenting views. The result? A society increasingly fractured—not just by ideology, but by the very information people consume.

Algorithms on platforms like Facebook, TikTok, and YouTube are designed to maximize attention. They learn what users click, like, and share, then serve up more of the same. Over time, this creates a feedback loop where users are exposed primarily to content that aligns with their views. Political groups, activists, and influencers—aware of this dynamic—have become more aggressive in pushing their narratives, often talking past one another rather than engaging in genuine debate.
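
The feedback loop described above can be sketched in a few lines of Python. This is a toy model, not any platform's actual code: the viewpoint labels, pool, and ranking rule are invented for illustration.

```python
from collections import Counter

# Hypothetical content pool: 50 items of each viewpoint.
POOL = ["viewpoint_a"] * 50 + ["viewpoint_b"] * 50

def rank(pool, engagement):
    # Score each item by how often the user has engaged with its viewpoint.
    return sorted(pool, key=lambda item: engagement[item], reverse=True)

def simulate(sessions=10, feed_size=10):
    engagement = Counter({"viewpoint_a": 1, "viewpoint_b": 1})
    preferred = "viewpoint_a"  # this user clicks mostly on one side
    for _ in range(sessions):
        feed = rank(POOL, engagement)[:feed_size]
        for item in feed:
            if item == preferred:  # each click feeds back into the ranking
                engagement[item] += 1
    return engagement

result = simulate()
```

After a handful of sessions, nearly all the engagement signal is one-sided, so the ranked feed serves that viewpoint exclusively: the echo chamber in miniature.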

This polarization is especially visible across generational lines. Young people, immersed in progressive content streams, often express disbelief at older generations’ views, unaware that their elders are seeing an entirely different digital reality. The reverse is equally true. Neither side is wrong to be confused—they’re simply not seeing the same internet.

A striking example of this divide was highlighted in a 2021 Google experiment, where two laptops—used by individuals with different browsing histories and political leanings—were placed side by side. When the same search term was entered, the results differed dramatically. One user saw mainstream news sources; the other was served partisan blogs and fringe commentary. This divergence underscores how even basic facts are filtered through personalized algorithms.

The consequences of this fragmentation are no longer theoretical. In October 2025, pro-Palestinian protesters targeted the Auckland home of Foreign Minister Winston Peters, chanting, livestreaming, and publicizing his address online. A window was smashed while his partner and a guest were inside, and his dog was injured by shattered glass. Peters condemned the incident as “disgraceful and blatant harassment,” warning that political activism had crossed a line into intimidation. The episode illustrates how moral outrage, amplified by algorithmic echo chambers, can override basic decency—even putting families at risk.

This is the first generation to encounter such technology at scale. Unlike past eras where newspapers offered shared reference points, today’s digital landscape is fragmented and personalized. The answer isn’t to abandon these platforms—but to become more mindful and savvy about how they shape perception. Users must learn to work harder to unearth alternative views, to think critically, and to question the completeness of their digital diet. Before forming opinions on controversial political, social, or religious issues, it’s essential to research widely, seek out opposing perspectives, and understand the algorithmic forces at play.

Until then, the echo chambers will persist—not because people refuse to listen, but because they no longer hear the same things.


Filed Under: Opinion Tagged With: AI, Ethics, Leadership, Management, Social Media, Social Media Algorithms, Values

The Double-Edged Pen: AI in Business Copywriting

October 9, 2025

When using AI for copywriting, you still have to provide the facts

A cautionary tale emerged in August 2025 when Rishi Nathwani KC, a senior barrister in Victoria, Australia, was publicly reprimanded for submitting AI-generated legal arguments in a murder trial. The submissions included fabricated quotes from legislative speeches and fictitious case law, purportedly from the Supreme Court. Justice James Elliott delayed the case by 24 hours after court associates failed to locate the cited precedents. Nathwani admitted the citations “do not exist,” having assumed their accuracy based on a few verified entries. The fallout was swift and sobering: a reminder that even seasoned professionals can be misled by AI’s confident tone and polished output.

Artificial intelligence has rapidly become a staple in the business copywriter’s toolkit, offering speed, scalability, and a surprising knack for tone-matching. When used wisely, AI excels at drafting articles where the human author provides the facts, structure, and intent—allowing the machine to handle the linguistic heavy lifting. This is especially effective for internal communications, product descriptions, and marketing blogs where the subject matter is well-understood and the factual base is solid. In these cases, AI acts as a tireless assistant, rephrasing, summarizing, and formatting content with impressive efficiency.

But the reliability of AI in copywriting hinges on one critical factor: the truth must come from the human. When tasked with generating non-fiction content independently—especially in technical, legal, or historical domains—AI can veer into dangerous territory. It may fabricate plausible-sounding details, statistics, or even citations. These so-called “ghost citations” are references to sources that don’t exist, often presented with convincing formatting and tone. The risk isn’t just academic—it’s reputational.

This incident underscores a broader truth: AI is not a source of knowledge, but a pattern generator. It doesn’t “know” facts—it predicts what words are likely to follow based on its training data. In business copywriting, this means AI should be used to express what the writer already knows, not to discover or assert new truths. The best practice is to treat AI like a junior editor: helpful with phrasing, formatting, and tone—but never trusted to originate facts or verify sources.
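
One practical guardrail implied here can be sketched in Python: hold every AI draft until each citation-like string has been ticked off by a human reviewer. The regex and helper names below are illustrative assumptions, not a real legal-citation parser.

```python
import re

# Rough, illustrative pattern for citation-like fragments ("Smith v. Jones",
# "[2019] HCA 12"); a production workflow would use a proper citation parser.
CITATION_PATTERN = re.compile(r"\[\d{4}\]\s+\w+|\bv\.?\s+[A-Z]\w+")

def extract_claims(draft):
    """Pull out citation-like fragments that need human verification."""
    return CITATION_PATTERN.findall(draft)

def ready_to_publish(draft, verified):
    """Block publication while any extracted citation is unverified."""
    return all(claim in verified for claim in extract_claims(draft))

draft = "As held in Smith v. Jones and [2019] HCA 12, the duty applies."
unchecked = ready_to_publish(draft, verified=set())                       # False
checked = ready_to_publish(draft, verified=set(extract_claims(draft)))    # True
```

The point of the design is that the default answer is "not yet": the AI's output never reaches publication on its own authority, only after a human has confirmed each claim it leans on.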

As AI tools become more embedded in business workflows, the burden of truth remains firmly on the human. The pen may be digital, but the responsibility is not.


Filed Under: Opinion

Footer

Contact us

If you’d like to find out more about our services and explore the possibility of us working together, get in touch. Our initial consultation is free. So you’ve nothing to lose!


+64 (3) 669 2777
+64 (27) 433 9745
contact@kan-and-company.com

Box 37 363
Halswell
Christchurch
New Zealand 8245

Copyright © 2026 Kan & Company All Rights Reserved · Privacy Policy · Log in
