
Can We Think Without Categories?

Publication data:

Digital Culture & Society (DCS), Vol. 4, no. 1 (2018): 17-28. Special issue "Rethinking AI: Neural Networks, Biometrics and the New Artificial Intelligence." Edited by Ramón Reichert, Mathias Fuchs, Pablo Abend, Annika Richterich, and Karin Wenz. Published by Transcript-Verlag.


Article introduction:

In this article I will discuss a few general challenges for Cultural Analytics research. Now that very large cultural datasets are available, and our computers can perform complex analysis quite quickly, how shall we look at culture? Do we use computational methods only to provide better answers to questions already established in the 19th- and 20th-century humanities paradigms, or do these methods allow us to develop fundamentally new concepts?

I think that such perspectives are necessary because contemporary culture itself is now driven by the same or similar methods. For instance, if we want to analyse the intentions, ideology, and psychology of the author of a certain cultural artefact or experience, this author may not be a human but some form of AI that uses a combination of data analysis, machine learning, and algorithmic generation. In fact, the difference between using computational methods and concepts to analyse cultural data today and twenty years ago is that these methods and concepts now drive the everyday digital culture lived by billions of people. When small numbers of humanists and social scientists were analysing cultural data with computers in the second half of the 20th century, their contemporary culture was mostly analogue, physical, and non-quantifiable. But today we as academic researchers live in the “shadow” of a world of social networks, recommendations, apps, and interfaces that all use media analytics. As I have already explained, I see media analytics as the new stage in the development of modern technological media. This stage is characterized by algorithmic large-scale analysis of media and user interactions, and by the use of the results in algorithmic decision making: contextual advertising, recommendations, search and other kinds of information retrieval, filtering of search results and user posts, document classification, plagiarism detection, video fingerprinting, content categorization of user photos, automatic news production, and so on.

And we are still only at the beginning of this stage. Given the trajectory of gradual automation of more and more functions of modern society using algorithms, I expect that the production and customization of many forms of at least “commercial culture” (characterized by conventions, genre expectations, and templates) will also be gradually automated. So, in the future, the already developed digital distribution platforms and media analytics will be joined by a third part: algorithmic media generation. We can see this at work already today in automatically generated news stories, in online content written about topics suggested by algorithms, in the production of some television shows, and in TV broadcasts of sports events where multiple robotic cameras automatically follow and zoom in on dynamic human performances.

Until ten years ago, the key cultural techniques we used to represent and reason about the world and other humans included natural languages, lens-based photo and video imaging, various other media for preserving and accessing information, calculus, digital computers, and computer networks. The core concepts of the data/AI society are now just as important. They form data society’s “mind”: the particular ways of encountering, understanding, and acting on the world and on humans. This is why, even if you have no intention of doing practical Cultural Analytics research yourself, you nevertheless need to become familiar with these new data-centred cultural techniques.

While both media analytics in industry and Cultural Analytics research use dozens of algorithms, behind them lies a small number of fundamental paradigms. We can think of them as types of the data/AI society’s cognition. The three most general ones are data visualization, unsupervised machine learning, and supervised machine learning. Others include feature extraction, clustering, dimension reduction, classification, regression, network science, time series analysis, and information retrieval.
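To make one of these paradigms concrete, here is a minimal sketch of unsupervised machine learning applied to cultural data. Everything in it is hypothetical: the two-dimensional "features" stand in for measurements (say, average brightness and saturation) extracted from two imaginary groups of images, and k-means is just one possible clustering algorithm; the article itself prescribes no particular implementation.

```python
import numpy as np

def kmeans(points, k, n_iter=50, seed=0):
    """Cluster the rows of `points` into k groups with Lloyd's algorithm."""
    rng = np.random.default_rng(seed)
    # Initialize centers as k randomly chosen data points.
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest center (Euclidean distance).
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of the points assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Synthetic stand-in for extracted image features: two imaginary groups of
# images, one dark and desaturated, one bright and saturated.
rng = np.random.default_rng(1)
group_a = rng.normal([0.2, 0.3], 0.05, size=(50, 2))
group_b = rng.normal([0.8, 0.7], 0.05, size=(50, 2))
features = np.vstack([group_a, group_b])

labels, centers = kmeans(features, k=2)
```

The point of the sketch is the paradigm, not the algorithm: no category labels are supplied in advance, and the grouping emerges from the structure of the data itself, which is what distinguishes unsupervised from supervised machine learning.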
