Hidden AI: How Algorithms Influence Our Daily Lives (Live in Bristol)
- Kieren Sharma
- Jun 28, 2024
- 4 min read
Updated: Feb 28
Hi everyone, welcome to a special recap of our first-ever live episode of Artificially Ever After, recorded at Bristol City Hall! We were extremely honoured to be in front of a live audience for our 10th episode, diving into the growing world of hidden AI. In this episode, we explored how algorithms influence our daily lives, and we also looked ahead at the rise of generative AI and how to prepare for the coming wave.
What Do We Mean By Hidden AI?
We kicked things off by defining what exactly we mean by “hidden AI.” These are the AI systems integrated into our daily routines, often without us realising we are interacting with them. Unlike visible AI, such as chatbots or self-driving cars, hidden AI operates in the background, making decisions that affect our lives without our explicit knowledge.
Examples of Hidden AI include:
Recommendation systems on social media platforms
Algorithms that decide credit ratings when you apply for a loan
Parole assessments in the justice system
We focused specifically on recommendation systems, as they are the most pervasive example of hidden AI.
Recommendation Systems: From Libraries to Social Media
We took a step back to understand how recommendation systems worked before AI. Previously, experts like librarians defined “if-then” rules to suggest books based on reader preferences. These early systems aimed to satisfy the user's specific needs. However, modern recommendation systems have shifted their focus to maximising engagement. Now, algorithms track micro-behaviours like how long you spend on a post, rather than focusing on individual needs, with the goal of keeping you on the platform for as long as possible.
Key changes in Recommendation Systems:
Shift in Goal: From user satisfaction to maximizing engagement
Profiling: Algorithms now focus on micro-behaviours rather than user-defined preferences
Loss of Human Element: The 'expert' opinion is lost, replaced by deep-learning algorithms identifying patterns in human behaviour
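The shift described above can be illustrated with a toy sketch. Everything here is invented for illustration (the rules, titles, and scores are hypothetical, not any real platform's algorithm): the first function encodes an expert's explicit if-then rules, while the second simply ranks content by predicted attention.

```python
# Toy contrast: expert rule-based recommendation vs. engagement maximisation.
# All rules, titles, and scores are made up for illustration only.

def librarian_recommend(liked_genre):
    """Old style: an expert's explicit if-then rules based on stated preference."""
    rules = {
        "mystery": "The Hound of the Baskervilles",
        "sci-fi": "The Left Hand of Darkness",
    }
    return rules.get(liked_genre, "Ask the librarian!")

def engagement_recommend(posts):
    """New style: rank by predicted attention, regardless of stated preference."""
    # posts: list of (title, predicted_seconds_of_attention)
    return max(posts, key=lambda post: post[1])[0]

print(librarian_recommend("mystery"))  # recommends based on what you asked for
posts = [("thoughtful essay", 40), ("outrage clip", 95)]
print(engagement_recommend(posts))     # recommends whatever holds you longest
```

The point of the contrast: the second function never consults what the user says they want, only what keeps them watching, which is the shift in goal described above.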
This shift has led to some surprising outcomes. For instance, recommendation systems have been able to predict if someone is expecting a baby based on online activity. Platforms like TikTok have achieved incredibly high engagement rates, with some users spending over 10 hours per week on the app. This increased engagement, while beneficial for platforms, raises questions about whether these systems are truly serving us. Studies have shown that switching from chronological to algorithmic feeds leads to significantly more time spent on social media.
Expectations vs. Reality
We discussed how recommendation algorithms, initially built to increase connectivity and information sharing, have led to unexpected consequences.
Below, we highlight some key discrepancies between the initial promises made by the companies developing these systems and the actual outcomes:
Connectivity: More online interaction, but less physical time together
Information Sharing: Democratisation of publishing opinions, but also proliferation of fake news
Community Building: Access to global communities, but increased polarisation and echo chambers
The core issue is that deep-learning algorithms are designed to meet narrow, specific goals without necessarily considering the broader impact.
Generative AI: The Next Frontier
We then looked ahead to generative AI, which is becoming increasingly integrated into society. While exciting, this technology raises new challenges. We highlighted that while generative AI is being used to democratise education, provide emotional support, and generate content faster, there are also pitfalls. One concern is the potential homogenisation of ideas if everyone uses the same AI tools.
A key point is that generative AI is becoming increasingly integrated into the hidden AI category. For instance, Google's AI Overviews feature provides AI-generated summaries of search results, which can sometimes surface inaccurate information. Similarly, AI is being used in biology research to discover new antibiotics, but because we do not fully understand how these systems work, such uses carry risks.
The concern is that trust in online content could collapse. By 2026, up to 90% of online content could be synthetically generated. This has the potential to undermine the benefits of the digital world.
What Can We Do?
We explored what can be done at the government, industry, and individual level.
Possible Actions:
Government: Algorithmic transparency policies, similar to the EU's GDPR data protection laws
Industry: Slowing the “arms race” to develop the most capable AI, with more robust testing before release
Individuals: Learning the core concepts behind AI systems, making informed decisions about online behaviour and being mindful of the information you consume
Audience Questions and a Few Fun Facts
We also had some great audience questions during the live recording. The audience brought up issues such as how to optimise algorithms for positive influence, balancing efficiency with exploration, the spread of political misinformation, and how algorithms could better optimise for user satisfaction.
As is tradition, we also had our fun fact segment, which included:
A trading algorithm developed by a YouTuber using a goldfish named Frederick that actually outperformed the NASDAQ
A football team in Scotland that used an AI ball-tracking system that would often focus on the bald head of a linesman
The Japanese word for “love” is “ai”
Final Thoughts
We are in a crucial moment where we need to think critically about how we integrate AI into our lives. As we've seen with social media, technology implementation comes with responsibility. We hope this episode provided some valuable insights and encouraged everyone to engage with these issues. We'd love to hear your thoughts, so please do check out the full episode and reach out on our socials!
Where to learn more?
Some great books:
Weapons of Math Destruction by Cathy O'Neil
Automating Inequality by Virginia Eubanks
Scary Smart by Mo Gawdat
The Shortcut by Nello Cristianini
The Alignment Problem by Brian Christian
If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!