AI-Based Consumer Behaviour Modelling Under Algorithmic Transparency and Regulatory Constraints: A Governance-by-Design Framework Using the EU AI Act and Digital Services Act as Benchmarks
Abstract
AI-driven consumer behaviour modelling powers targeting, ranking,
recommendations, dynamic pricing, and churn prediction, but it increasingly
operates under legal requirements for transparency, risk management, and
accountability. This paper develops a governance-by-design framework for
non‑EU jurisdictions by using the EU Artificial Intelligence Act (Regulation (EU)
2024/1689) and the recommender-system transparency obligations of the EU
Digital Services Act as comparative benchmarks. Drawing on the OECD Recommendation
on AI and the NIST AI Risk Management Framework, we translate benchmark
obligations into implementable lifecycle controls: data governance, model
documentation, explainability, bias evaluation, audit logging, post‑deployment
monitoring, and incident response. To strengthen decision usefulness, we add a
quantitative scenario layer that compares governance tiers over a 2026–2035
horizon on expected consumer-harm incidents, model performance retention under
drift, and a regulatory-risk premium proxy (stylised sketches of the control
registry and the scenario layer follow this abstract). The result is a modular
control architecture, an implementation sequence, and a set of metrics that
reduce regulatory and reputational tail risks while preserving commercial
effectiveness.
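The lifecycle controls listed above can be read as a machine-checkable control registry. The Python sketch below shows one illustrative encoding under our own assumptions: the stage labels and field names are hypothetical scaffolding rather than an interface defined in the paper, while the benchmark anchors point to the corresponding articles of Regulation (EU) 2024/1689.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Control:
    name: str       # control from the framework's lifecycle list
    stage: str      # lifecycle stage (hypothetical labels)
    benchmark: str  # benchmark obligation the control operationalises

# Illustrative registry mirroring the abstract's seven controls.
CONTROL_REGISTRY = [
    Control("data governance",            "design",   "AI Act Art. 10"),
    Control("model documentation",        "build",    "AI Act Art. 11"),
    Control("explainability",             "build",    "AI Act Art. 13"),
    Control("bias evaluation",            "validate", "AI Act Art. 10(2)(f)"),
    Control("audit logging",              "operate",  "AI Act Art. 12"),
    Control("post-deployment monitoring", "operate",  "AI Act Art. 72"),
    Control("incident response",          "operate",  "AI Act Art. 73"),
]

def controls_for_stage(stage: str) -> list[str]:
    """Return the control names scheduled for a given lifecycle stage."""
    return [c.name for c in CONTROL_REGISTRY if c.stage == stage]

print(controls_for_stage("operate"))
# ['audit logging', 'post-deployment monitoring', 'incident response']
```

Keying each control to a benchmark article keeps the audit trail explicit: a compliance query reduces to a registry lookup.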
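The quantitative scenario layer can be made concrete in the same style. The sketch below compares three governance tiers over the 2026–2035 horizon on the abstract's three measures; every numeric parameter (baseline incident rate, drift decay, maximum premium, tier settings) is a placeholder chosen for illustration, not a value estimated in the study.

```python
from dataclasses import dataclass

YEARS = range(2026, 2036)  # the 2026-2035 horizon from the abstract

@dataclass(frozen=True)
class GovernanceTier:
    name: str
    harm_mitigation: float  # share of baseline incidents avoided (hypothetical)
    retrain_interval: int   # years between retraining cycles (hypothetical)
    residual_risk: float    # residual regulatory exposure in [0, 1] (hypothetical)

# Illustrative tiers; all parameter values are placeholders, not results.
TIERS = [
    GovernanceTier("baseline", 0.00, 5, 1.00),
    GovernanceTier("partial",  0.40, 2, 0.55),
    GovernanceTier("full",     0.75, 1, 0.25),
]

BASE_INCIDENTS = 12.0  # expected harm incidents per year, ungoverned (hypothetical)
DRIFT_DECAY = 0.10     # annual relative performance loss without retraining (hypothetical)
MAX_PREMIUM_BP = 150   # risk premium in basis points at full exposure (hypothetical)

def simulate(tier: GovernanceTier) -> dict:
    incidents, retention, since_retrain = 0.0, [], 0
    for _ in YEARS:
        incidents += BASE_INCIDENTS * (1.0 - tier.harm_mitigation)
        # Performance decays geometrically with years since the last retrain.
        retention.append((1.0 - DRIFT_DECAY) ** since_retrain)
        since_retrain = (since_retrain + 1) % tier.retrain_interval
    return {
        "expected_incidents_2026_2035": round(incidents, 1),
        "mean_performance_retention": round(sum(retention) / len(retention), 3),
        "risk_premium_bp": round(MAX_PREMIUM_BP * tier.residual_risk),
    }

for tier in TIERS:
    print(tier.name, simulate(tier))
```

Under these placeholder parameters, higher tiers accumulate fewer expected incidents, retain more performance under drift, and carry a lower premium proxy, which is the qualitative pattern the scenario layer is designed to quantify.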
References
- European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union.
- European Union. (2022). Regulation (EU) 2022/2065 on a Single Market for Digital Services (Digital Services Act). Official Journal of the European Union.
- OECD. (2019). Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449). OECD Legal Instruments.
- NIST. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1). National Institute of Standards and Technology.
- NIST. (2024). NIST AI RMF Generative AI Profile (NIST-AI-600-1). National Institute of Standards and Technology.
- European Commission. (n.d.). European Centre for Algorithmic Transparency (ECAT).
- Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of KDD (pp. 1135–1144).
- Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems.
- Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems.
- Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning: Limitations and opportunities. MIT Press.
- Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35.
- Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. In Proceedings of FAT* (pp. 59–68).
- Sweeney, L. (2013). Discrimination in online ad delivery. Communications of the ACM, 56(5), 44–54.
- Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings: A tale of opacity, choice, and discrimination. Proceedings on Privacy Enhancing Technologies, 2015(1), 92–112.
- Shin, D. (2021). The effects of explainability and causability on trust in AI systems. International Journal of Human–Computer Studies, 148, 102551.
- Kizilcec, R. F. (2016). How much information?: Effects of transparency on trust in an algorithmic interface. In Proceedings of CHI (pp. 2390–2395).
- Zhang, Y., Lomas, D., & Koedinger, K. (2020). Explaining recommendations: Effects on user trust and performance. ACM Transactions on Interactive Intelligent Systems, 10(4), 1–31.
- Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. In Proceedings of FAT* (pp. 149–159).
- Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity.
- Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
- Kleinberg, J., Ludwig, J., Mullainathan, S., & Obermeyer, Z. (2015). Prediction policy problems. American Economic Review, 105(5), 491–495.
- Sunstein, C. R. (2016). The ethics of influence: Government in the age of behavioral science. Cambridge University Press.
- Aridor, G., Che, Y.-K., Salz, T., & Zhao, Y. (2020). The economic consequences of data privacy regulation: Empirical evidence from GDPR. NBER Working Paper.
- Crémer, J., de Montjoye, Y.-A., & Schweitzer, H. (2019). Competition policy for the digital era. European Commission.
- Rexhepi, B. R., Rexhepi, F. G., Xhaferi, B., Xhaferi, S., & Berisha, B. I. (2024). Financial accounting management: A case of Ege Furniture in Kosovo. Quality – Access to Success, 25(200).
- Daci, E., & Rexhepi, B. R. (2024). The role of management in microfinance institutions in Kosovo—Case study Dukagjini Region. Quality – Access to Success, 25(202).
- Murtezaj, I. M., Rexhepi, B. R., Dauti, B., & Xhafa, H. (2024). Mitigating economic losses and prospects for the development of the energy sector in the Republic of Kosovo. Economics of Development, 23(3), 82–92.
- Murtezaj, I. M., Rexhepi, B. R., Xhaferi, B. S., Xhafa, H., & Xhaferi, S. (2024). The study and application of moral principles and values in the fields of accounting and auditing. Pakistan Journal of Life and Social Sciences, 22(2), 3885–3902.
- Rexhepi, B. R., Daci, E., Mustafa, L., & Berisha, B. I. (2024). Tax accounting in the Republic of Kosovo. Economics, Management and Sustainability, 9(3), 66–73. https://doi.org/10.14254/jems.2024.9-3.5