Call for Participation – Digital Methods Summer School 2014, On Geolocation: Remote Event Analysis (Mapping Conflicts, Disasters, Elections and other Events with Online and Social Media Data), 23 June – 4 July 2014
"The Digital Methods Initiative is a contribution to doing research into the 'natively digital'. Consider, for example, the hyperlink, the thread and the tag. Each may 'remediate' older media forms (reference, telephone chain, book index), and genealogical histories remain useful (Bolter/Grusin, 1999; Elsaesser, 2005; Kittler, 1995). At the same time new media environments – and the software makers – have implemented these concepts, algorithmically, in ways that may resist familiar thinking as well as methods (Manovich, 2005; Fuller, 2007). In other words, the effort is not simply to import well-known methods – be they from humanities, social science or computing. Rather, the focus is on how methods may change, however slightly or wholesale, owing to the technical specificities of new media.
The initiative is twofold. First, we wish to interrogate what scholars have called 'virtual methods,' ascertaining the extent to which the new methods can stake claim to taking into account the differences that new media make (Hine, 2005). Second, we desire to create a platform to display the tools and methods to perform research that, also, can take advantage of 'web epistemology'. The web may have distinctive ways of recommending information (Rogers, 2004; Sunstein, 2006). Which digital methods innovate with and also critically display the recommender culture that is at the heart of new media information environments?
Amsterdam-based new media scholars have been developing methods, techniques and tools since 1999, starting with the Net Locator and, later, the Issue Crawler, which focuses on hyperlink analysis (Govcom.org, 1999, 2001). Since then a set of allied tools and independent modules have been made to extend the research into the blogosphere, online newssphere, discussion lists and forums, folksonomies as well as search engine behavior. These tools include scripts to scrape web, blog, news, image and social bookmarking search engines, as well as simple analytical machines that output data sets as well as graphical visualizations.
The analyses may lead to device critiques – exercises in deconstructing the political and epistemological consequences of algorithms. They may lead to critical inquiries into debates about the value and reputation of information."
"I am going to argue that 'media independence' does not just happen by itself. For a technique to work with various data types, programmers have to implement a different method for each data type. Thus, media-independent techniques are general concepts translated into algorithms, which can operate on particular data types. Let us look at some examples.
Consider the omnipresent cut and paste. The algorithm to select a word in a text document is different from the algorithm to select a curve in a vector drawing, or the algorithm to select a part of a continuous tone (i.e. raster) image. In other words, 'cut and paste' is a general concept that is implemented differently in different media software depending on which data type this software is designed to handle. (In Larry Tesler's original implementation of the universal commands concept done at PARC in 1974–5, it only worked for text editing.) Although cut, copy, paste, and a number of similar 'universal commands' are available in all contemporary GUI applications for desktop computers (but not necessarily in mobile phone apps), what they actually do and how they do it is different from application to application.
Search operates in the same way. The algorithm to search for a particular phrase in a text document is different from the algorithm that searches for a particular face in a photo or a video clip. (I am talking here about 'content-based search,' i.e. the type of search which looks for information inside actual images, as opposed to only searching image titles and other metadata the way image search engines such as Google Image Search were doing it in the 2000s.) However, despite these differences the general concept of search is the same: locating any elements of a single media object – or any media objects in a larger set – that match particular user-defined criteria. Thus we can ask the web browser to locate all instances of a particular word in a current web page; we can ask a web search engine to locate all web pages which contain a set of keywords; and we can ask a content-based image search engine to find all images that are similar in composition to an image we provided. ...
Against these historical developments, the innovation of media software clearly stands out. It brings a new set of techniques which are implemented to work across all media. Searchability, findability, linkability, multimedia messaging and sharing, editing, view control, zoom and other 'media-independent' techniques are viruses that infect everything software touches – and therefore in their importance they can be compared to the basic organizing principles for media and artifacts which were used for thousands of years."
(Lev Manovich, 2013, pp.113–124)
Manovich, L. (2013). "Software Takes Command", Continuum.
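Manovich's distinction between a general concept and its type-specific implementations can be sketched in code. The toy Python example below is not from the book; the function names and data are invented for illustration. It implements 'search' once as a general dispatcher, with a different algorithm behind it for each data type: phrase matching for text, and a stand-in for content-based matching (comparing pixel values rather than metadata) for images.

```python
# Toy illustration of a 'media-independent' technique: one general
# concept (search = locate elements matching user-defined criteria)
# translated into a different algorithm for each data type.

def search_text(document, query):
    """Find every character offset where the query phrase occurs."""
    hits, start = [], document.find(query)
    while start != -1:
        hits.append(start)
        start = document.find(query, start + 1)
    return hits

def search_image(pixels, target, tolerance=30):
    """Find every pixel whose colour lies within a tolerance of the
    target - a stand-in for content-based image search, which matches
    on the image data itself rather than on titles or metadata."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [i for i, p in enumerate(pixels) if distance(p, target) <= tolerance]

def search(media, query):
    """The general concept: dispatch to a type-specific algorithm."""
    if isinstance(media, str):
        return search_text(media, query)
    return search_image(media, query)

print(search("to be or not to be", "be"))                      # → [3, 16]
print(search([(255, 0, 0), (250, 5, 5), (0, 0, 255)], (255, 0, 0)))  # → [0, 1]
```

The same dispatch structure would cover Manovich's cut-and-paste example: one 'select' concept, with separate selection algorithms for text spans, vector curves, and raster regions.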
"The use of data stored in transaction logs of Web search engines, Intranets, and Web sites can provide valuable insight into understanding the information-searching process of online searchers. This understanding can enlighten information system design, interface development, and devising the information architecture for content collections. This article presents a review and foundation for conducting Web search transaction log analysis. A methodology is outlined consisting of three stages, which are collection, preparation, and analysis. The three stages of the methodology are presented in detail with discussions of goals, metrics, and processes at each stage. Critical terms in transaction log analysis for Web searching are defined. The strengths and limitations of transaction log analysis as a research method are presented. An application to log client-side interactions that supplements transaction logs is reported on, and the application is made available for use by the research community. Suggestions are provided on ways to leverage the strengths of, while addressing the limitations of, transaction log analysis for Web-searching research. Finally, a complete flat text transaction log from a commercial search engine is available as supplementary material with this manuscript."
(Bernard J. Jansen, 2006)
Jansen, B. J. (2006). "Search log analysis: What it is, what's been done, how to do it." Library & Information Science Research 28(3): 407–432.
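Jansen's three stages can be sketched on a toy flat log. The tab-separated record format below (user id, timestamp, query) is a hypothetical stand-in, not the format of the log supplied with the article, and the cleaning and metrics shown are minimal examples of the preparation and analysis stages.

```python
# Minimal sketch of the three-stage transaction log methodology:
# collection, preparation, analysis. Log format is invented.
from collections import defaultdict
from statistics import mean

RAW_LOG = """\
u1\t2006-03-01 09:00:12\tlibrary science
u1\t2006-03-01 09:01:40\tsearch log analysis
u2\t2006-03-01 09:02:05\ttransaction logs
u2\t2006-03-01 09:02:58\ttransaction logs
"""

# Stage 1 - collection: here the raw log is already in hand as text.
# Stage 2 - preparation: parse records and drop exact duplicate
# queries resubmitted by the same user (a common cleaning step).
records, seen = [], set()
for line in RAW_LOG.splitlines():
    user, timestamp, query = line.split("\t")
    if (user, query) not in seen:
        seen.add((user, query))
        records.append({"user": user, "timestamp": timestamp, "query": query})

# Stage 3 - analysis: simple term-level and searcher-level metrics.
terms_per_query = [len(r["query"].split()) for r in records]
queries_per_user = defaultdict(int)
for r in records:
    queries_per_user[r["user"]] += 1

print(len(records))            # → 3 unique queries after cleaning
print(mean(terms_per_query))   # mean query length in terms
print(dict(queries_per_user))  # → {'u1': 2, 'u2': 1}
```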
"We have created a metaphorical search engine. Our search algorithms generate results that assist people in the creation of new knowledge by returning disparate, but potentially metaphorically related information. These are the types of insights that are valuable for people working at the edges of their knowledge field. This is an immensely powerful creative tool for use by anyone who is looking to generate new ideas or see their problem or topic in a whole new light.
Yossarian is the main character of Joseph Heller's novel 'Catch-22.' Our work is highlighting the Catch-22 of current search and personalization algorithms, in that their use both simultaneously helps us through access to existing knowledge, and hurts us through the reinforcement of that same knowledge. In finding new and innovative search solutions to this problem, we declare that Yossarian Lives!"
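Yossarian's actual algorithms are not public. As a toy sketch of the inversion the project describes – surfacing disparate rather than similar results – one could rank a corpus by *distance* from the query in an embedding space, where a conventional engine would rank by similarity. The vectors below are invented purely for illustration.

```python
# Toy 'metaphorical search': return the items LEAST similar to the
# query. Embedding values are made up for the example.
import math

EMBEDDINGS = {
    "river":  [0.9, 0.1, 0.0],
    "stream": [0.8, 0.2, 0.1],
    "time":   [0.1, 0.9, 0.3],
    "money":  [0.2, 0.3, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def metaphorical_search(query, k=2):
    """Rank by ascending similarity; a conventional engine would
    sort by descending similarity instead."""
    q = EMBEDDINGS[query]
    ranked = sorted((w for w in EMBEDDINGS if w != query),
                    key=lambda w: cosine(q, EMBEDDINGS[w]))
    return ranked[:k]

print(metaphorical_search("river"))  # → ['time', 'money']
```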
"Art Photo Index (API) is a visual index of important art and documentary photographers, their images and their websites from throughout the world.
Our goal with API is to become the most useful index and search engine for discovering and exploring fine-art and documentary photography. Unlike other general-purpose search engines, where pertinent information is buried within the less relevant, the Art Photo Index search tool focuses only on vetted art and documentary photographers and their work, making it the ideal search engine for our discerning audience of curators, gallery directors, publishers, editors, picture researchers, collectors and others who love art and documentary photography.
The photographers included in Art Photo Index have been selected as a result of their accomplishments in the art or documentary photography field. Many of those included have been published by major photobook publishers or serious art photography magazines. Some have received awards given by art and documentary photography organizations. Others are represented by major art photography galleries."
Fig. 1 Meighan Ellis (2009). "The Assistant", Te Aro, Wellington, Aotearoa New Zealand, from The Sitters series.