Executive Summary
This report challenges the assumption that AI and data-driven decisions are inherently neutral and reliable. It demonstrates how structural weaknesses in algorithmic systems can distort outcomes and increase business risk when left unchecked. Through critical analysis and real-world evidence, the report sheds light on the need for stronger monitoring, deeper institutional understanding of AI systems, and more deliberate oversight at the leadership level.
Introduction
There is a widespread belief that automated systems make decisions in a more rational and objective manner than humans do. Along the same lines, many of us tend to trust algorithms and their decisions, assuming they are neutral and free from bias.
Decision-making, especially fast decision-making, is central to business organizations. Over the years, we have seen many instances where business decision-making has been handed off to data-driven models in various capacities. There is no doubt that AI can process data and identify trends and patterns at a scale no human can match. However, if “AI is the affordance of human intelligence to machines” (Ma & Sun, 2020), and human intelligence is limited by context, meaning humans are neither perfectly objective nor rational, then we must question the perceived objectivity and flawlessness of AI and other data-driven models.
In recent years, AI experts have emphasized that companies must not confuse information with judgment. This report examines AI’s role in business decision-making, why its failure modes should concern boardroom leaders, and what experts suggest to mitigate these risks.
The Illusion of Objectivity: The Dissection of AI
Errors and bias can be ingrained in AI at roughly three stages (IBM, n.d.):
Data Collection:
Bias in AI often originates here. The data we use — or discard — in the machine-learning process plays a crucial role in how the resulting model thinks and behaves.
For example, training an AI on a company’s hiring history has proven disastrous. A recent University of Washington study examined how three leading AI language models ranked job resumes. Testing more than 550 real-world resumes and changing only the names to reflect white or Black men or women, the researchers found stark disparities: resumes with white-associated names were preferred 85% of the time, and those with female-associated names just 11% of the time (Milne, 2024).
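The mechanics of such a name-swap audit are simple to sketch. Below is a minimal, hypothetical Python version: `score_resume` stands in for a real model call, the names and numbers are invented for illustration, and a small bias is injected into the stub so the audit has something to detect. This does not reproduce the study’s actual models or data.

```python
# Hypothetical name-swap resume audit: identical resumes, only the name changes.
import random

NAMES = {
    "white_male": ["Todd Becker", "Brad Walsh"],              # illustrative names only
    "black_female": ["Latoya Washington", "Ebony Jackson"],
}
RESUME = "{name}\nSoftware engineer, 5 years of experience, B.S. in CS."

def score_resume(text: str) -> float:
    # Stand-in for a real ranking model. A small bias is injected
    # deliberately so the audit below has a disparity to surface.
    bonus = 2.0 if any(n.split()[0] in text for n in NAMES["white_male"]) else 0.0
    return random.gauss(70, 5) + bonus

def audit(trials: int = 10_000) -> float:
    # Fraction of head-to-head comparisons won by the white-male variant
    # of an otherwise identical resume.
    wins = 0
    for _ in range(trials):
        a = RESUME.format(name=random.choice(NAMES["white_male"]))
        b = RESUME.format(name=random.choice(NAMES["black_female"]))
        wins += score_resume(a) > score_resume(b)
    return wins / trials

print(f"White-male variant preferred in {audit():.0%} of comparisons")
```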
Data Labeling:
The process of labeling the data we train AI on can also introduce bias. Training typically requires sorting large datasets into categories. For example, a company may want to categorize online reviews of a product as positive, negative, or neutral.
Consider the following review: “It does what it’s supposed to do. You can buy it, I guess.”
One individual may label this review as “positive”, while another may label it as “neutral”. Personal and cultural perspectives and context affect which “box” of training data the review goes into.
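This kind of disagreement can be quantified before training ever starts. The sketch below computes Cohen’s kappa, a standard inter-annotator agreement statistic, over a handful of hypothetical sentiment labels (the labels are invented for illustration):

```python
# Measuring label disagreement between two annotators with Cohen's kappa.
from collections import Counter

annotator_a = ["positive", "neutral", "negative", "positive", "neutral"]
annotator_b = ["neutral", "neutral", "negative", "positive", "positive"]

def cohens_kappa(a: list[str], b: list[str]) -> float:
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n       # raw agreement rate
    freq_a, freq_b = Counter(a), Counter(b)
    labels = set(a) | set(b)
    # Agreement expected if both annotators labeled at random
    # according to their own label frequencies.
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

print(f"kappa = {cohens_kappa(annotator_a, annotator_b):.2f}")  # low kappa -> noisy labels
```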
Similarly, doubts have emerged about the accuracy of facial analysis systems. A model may label a neutral facial expression from a Black individual as “angry” more often than the same expression from a white individual, reflecting racial stereotypes embedded in the labeled data. Given the immense amounts of data AI is trained on, and experts’ concerns that this process is not regulated nearly enough (McClain et al., 2025), the possibility that most AI models are biased in some way is not far-fetched.
Model Training and Deployment:
Bias can also occur if training data does not represent diverse groups in appropriate proportions, or if the algorithm favors common patterns (i.e., patterns that represent the majority). After deployment, bias may still appear if real-world data drifts away from the training data, if the model is not tested on diverse users, or if the system is not regularly monitored and updated. A drift check like the one sketched below can catch the first of these problems.
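As a concrete illustration, the sketch below compares one input feature’s training distribution against production values using a two-sample Kolmogorov–Smirnov test from SciPy. The data is synthetic and the threshold is illustrative; a real deployment would run such checks on live model inputs.

```python
# Minimal post-deployment drift check on a single input feature.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(300_000, 50_000, 5_000)  # e.g. home prices seen at training time
prod_feature = rng.normal(340_000, 60_000, 1_000)   # prices arriving after deployment

stat, p_value = ks_2samp(train_feature, prod_feature)
if p_value < 0.01:  # illustrative significance threshold
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2g}): review or retrain the model")
else:
    print("No significant drift detected")
```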
Automation Bias: The Zillow Case Study
Automation bias occurs when individuals uncritically accept AI output and over-rely on automated technologies for executive decision-making (IBM, n.d.). Because business demands many high-stakes judgment calls, we may be tempted to turn to AI to ease the process. Depending on the context, however, this can be detrimental to the business.
Zillow invested heavily in machine-learning valuation technology and, beginning in Arizona in 2018, used it to power its iBuyer program, which streamlined the home-buying process (Keith, 2021). iBuyer allowed homeowners to receive a cash offer and sell directly to Zillow, which could then resell the properties for profit. Zillow expanded the program aggressively until it paused purchases in 2021 (Bahney, 2021). Although profitable at first, the AI failed the firm by overestimating home prices, buying more homes than was logical for the business, and ignoring shifting market conditions in its valuations (Datta, 2021). Moreover, the AI failed to consider that the presence of iBuyer would increase home prices not only in the area (with homeowners fishing for higher prices) but also in neighboring housing markets, as local homebuyers were pushed out (Harrison et al., 2024). Perhaps most importantly, the AI failed to take the phenomenon of adverse selection into account when placing bids to grow its “inventory of homes” (Helgaker et al., 2023).
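Adverse selection is worth unpacking, because it bites even when a valuation model is unbiased on average. The toy simulation below (all numbers invented; this is not Zillow’s actual model) shows the mechanism: sellers know their home’s value better than the algorithm does and accept only offers that overshoot it, so the accepted inventory skews toward losses.

```python
# Toy simulation of adverse selection facing an algorithmic home buyer.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_value = rng.normal(300_000, 40_000, n)              # what each home is really worth
model_estimate = true_value + rng.normal(0, 25_000, n)   # unbiased but noisy valuation
offer = model_estimate * 0.98                            # buyer bids slightly under its estimate

# Informed sellers accept only when the offer is at or above true value,
# i.e. precisely the cases where the model overestimated.
accepted = offer >= true_value
margin = true_value[accepted] - offer[accepted]          # margin if resold at true value

print(f"Offers accepted: {accepted.mean():.0%}")
print(f"Average margin on accepted homes: ${margin.mean():,.0f}")  # negative: systematic losses
```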
All these shortcomings culminated in Zillow incurring a loss of $881 million in 2021 and shutting down its iBuying Homes segment in November 2021 (Keith, 2021). Shareholders also filed a lawsuit against Zillow for “failing to disclose the problems with its home flipping iBuying algorithm” (Susarla et al., 2024).
Before analyzing what went wrong with the iBuyer program, we must consider the pandemic and its effect on the business. While the pandemic was relevant to Zillow's iBuyer failure, it acted more as an indicator than a cause. The real issue was Zillow’s own pricing algorithm and the company’s aggressive home-buying strategy, which proved unsustainable (Kiger, 2021). Zillow tried to become a leader in the real-estate industry while giving little weight to human expertise, relying almost solely on valuations generated by the iBuyer algorithm. In fact, the company did not even hire full-time salaried agents until September 2020. So when the housing market shifted significantly and unexpectedly due to the pandemic, it further exposed the algorithm’s shortcomings.
Commenting on the failure of iBuyer, co-founder Rich Barton said, "Fundamentally, we have been unable to predict future pricing of homes to a level of accuracy that makes this a safe business to be in” (DeepLearning.AI, 2021). In hindsight, there were multiple ways the business model could have worked and the mistakes been prevented with the help of human input (Susarla et al., 2024). Arguably, had the algorithm been monitored for flaws and biases, it could have been improved to account for the economic phenomena and market factors it was overlooking in its valuations.
Other Concerns
Besides the biases ingrained in AI algorithms and the information they neglect, there are other concerns relating to the interpretation and use of the findings of AI.
Regardless of whether we use AI to analyze data, and whether our data is limited or vast, the analysis will always have assumptions built into it. Decision-makers may let bad assumptions or even bad data pass, knowingly or unknowingly, because “data-backed” is perhaps the most impressive adjective one can attach to an idea (Nolis, 2018).
In boardroom settings, algorithmic outputs can carry an added sense of authority, making them more difficult to challenge. Researchers have also found that decision-makers are more inclined to accept recommendations made by an algorithm when those recommendations align with their own assumptions and biases (Alon-Barkat & Busuioc, 2023).
Possible Solutions: Conscious Business Leadership
While experts have suggested several ways to mitigate errors in AI, the variability in the nature of those errors means there is no single solution. In a business context, increased data and AI literacy among professionals can reduce risk. By being critically aware of which data an AI system uses, how it uses it, and what exactly it is deciding, we can significantly reduce the influence of skewed AI outputs on business decisions.
Additionally, regular monitoring of deployed AI systems is crucial for enhancing fairness, identifying biases, and accounting for newly relevant economic factors. Most importantly, keeping human expertise in the loop is essential whenever decisions shaped by AI could have dire financial, ethical, or legal consequences. A simple gating pattern, sketched below, illustrates what this can look like in practice.
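“Human in the loop” can be as concrete as a routing rule. The sketch below is hypothetical (the `Decision` fields and both thresholds are invented for illustration): the system auto-executes only high-confidence, low-exposure decisions and escalates everything else to a person.

```python
# A minimal human-in-the-loop gate for automated business decisions.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # e.g. "approve_offer"
    confidence: float  # model confidence in [0, 1]
    exposure: float    # financial exposure of the decision, in dollars

def route(d: Decision, min_conf: float = 0.9, max_exposure: float = 50_000) -> str:
    # Auto-execute only when the model is confident AND the stakes are low;
    # everything else goes to a human reviewer.
    if d.confidence >= min_conf and d.exposure <= max_exposure:
        return "auto-execute"
    return "escalate to human reviewer"

print(route(Decision("approve_offer", confidence=0.95, exposure=10_000)))   # auto-execute
print(route(Decision("approve_offer", confidence=0.97, exposure=450_000)))  # escalate
```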
Conclusion
As AI becomes central to strategic decision-making, we are reminded time and time again that responsibility cannot be delegated to algorithms alone. Leadership must remain actively engaged in questioning outputs, scrutinizing the assumptions used, and assessing broader implications. The long-term success of a business will depend not on blind technological innovation, but on disciplined oversight, ethical awareness, and a clear understanding of where human judgment remains indispensable.
References
Bahney, A. (2021, October 19). Zillow slams the brakes on home buying as it struggles to manage its backlog of inventory. CNN. https://www.cnn.com/2021/10/18/homes/zillow-halting-home-buying/index.html
Keith, E. (2021, November 3). A comprehensive timeline of Zillow's misadventures in iBuying. Online Marketplaces. https://www.onlinemarketplaces.com/articles/timeline-of-zillow-ibuying/
Datta, A. (2021, December 13). The $500mm+ debacle at Zillow Offers – What went wrong with the AI models? Inside AI News. https://insideainews.com/2021/12/13/the-500mm-debacle-at-zillow-offers-what-went-wrong-with-the-ai-models/
Harrison, D. M., Seiler, M. J., & Yang, L. (2024). The impact of iBuyers on housing market dynamics. The Journal of Real Estate Finance and Economics, 68, 425–461. https://doi.org/10.1007/s11146-023-09954-z
Helgaker, E., Oust, A., & Pollestad, A. J. (2023). Adverse selection in iBuyer business models—don’t buy lemons! Zeitschrift für Immobilienökonomie, 9(2), 109–138. https://doi.org/10.1365/s41056-022-00065-z
Kiger, P. J. (2021, December 9). Flip Flop: Why Zillow's Algorithmic Home Buying Venture Imploded. Stanford Graduate School of Business. https://www.gsb.stanford.edu/insights/flip-flop-why-zillows-algorithmic-home-buying-venture-imploded
Kunjumuhammed, S. K., Madi, H., & Abouraia, M. (Eds.). (2024). Risks and Challenges of AI-driven Finance: Bias, Ethics, and Security. IGI Global.
Ma, L., & Sun, B. (2020). Machine learning and AI in marketing – Connecting computing power to human insights. International Journal of Research in Marketing, 37(3), 481–504.
McClain, C., Kennedy, B., Gottfried, J., Anderson, M., & Pasquini, G. (2025, April 3). How the U.S. Public and AI Experts View Artificial Intelligence. Pew Research Center. https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/
Milne, S. (2024, October 31). AI tools show biases in ranking job applicants’ names according to perceived race and gender. UW News. https://www.washington.edu/news/2024/10/31/ai-bias-resume-screening-race-gender/
Nolis, J. (2018, May 17). You’re relying on data too much. Medium. https://medium.com/data-science/youre-relying-on-data-too-much-250d4edc70c3
Price prediction turns perilous: How Covid broke Zillow's pricing algorithm. (2021, November 17). DeepLearning.AI. https://www.deeplearning.ai/the-batch/price-prediction-turns-perilous/
Rouxel, C. (2026, January 23). AI Won't Fix Bad Decisions: Why Conscious Leadership Matters. Mexico Business News. https://mexicobusiness.news/talent/news/ai-wont-fix-bad-decisions-why-conscious-leadership-matters
Alon-Barkat, S., & Busuioc, M. (2023). Human–AI interactions in public sector decision making: “Automation bias” and “selective adherence” to algorithmic advice. Journal of Public Administration Research and Theory, 33(1), 153–169. https://doi.org/10.1093/jopart/muac007
Shaddix, R. (2020, May 20). Your Data-Driven Decisions Are Probably Wrong. Forbes. https://www.forbes.com/sites/rebeccasadwick/2020/05/20/data-driven-decisions/
Susarla, P., Purnell, D., & Scott, K. (2024). Zillow’s artificial intelligence failure and its impact on perceived trust in information systems. Journal of Information Technology Teaching Cases, 0(0). https://doi.org/10.1177/20438869241279865
IBM. (n.d.). What is data bias? https://www.ibm.com/think/topics/data-bias