“What is Visualization?”

“What is Visualization?” by Lev Manovich (2010)

Found at http://manovich.net/index.php/projects/what-is-visualization

“What is information visualization? […] So lets start with a provisional definition that we can modify later. Lets define information visualization as a mapping between discrete data and a visual representation. We can also use different concepts besides ‘representation,’ each bringing an additional meaning.”

Information Visualization (often abbreviated infovis) is a multidisciplinary field concerned with the representation of information or data mapped to visual elements. Strict definitions elude the infovis community because of the diversity of its contextual range. Computer scientists often classify infovis in terms of interactive displays of data, but this is a narrow view of the potential of graphic form.

The scientific visualization and information visualization communities differ more in the technologies and techniques they apply than in the visualizations themselves. Scientific visualization developed during the 1980s, while 3D technologies were being created; infovis developed in the 1990s and 2000s, alongside the abstraction of the monitor into its natural two dimensions and the rise of big-data processing capabilities.

“Infovis uses arbitrary spatial arrangements of elements to represent the relationships between data objects. Scientific, medical and geovisualization typically work with a priori fixed spatial layout of the real physical objects such as a brain, a coastline, a galaxy, etc. Since the layout in such visualizations is already fixed and can’t be arbitrary manipulated, color and/or other non-spatial parameters are used instead to show new information.”

Information Design and Information Visualization differ mostly in the state of their data structures: known and undiscovered, respectively.

“By employing graphical primitives (or, to use the language of contemporary digital media, vector graphics), infovis is able to reveal patterns and structures in the data objects that these primitives represent. However, the price being paid for this power is extreme schematization. We throw away 99% of what is specific about each object to represent only 1% – in the hope of revealing patterns across this 1% of objects’ characteristics.”

Manovich suggests that a reductionist abstraction of data is “throwing it away,” but I would argue that it makes the data implicit, or potentially obscures it. There is a valid argument that most infovis efforts are a reduction of the original, but an optimist-pessimist dichotomy describes the nature of the debate: one side sees an intensification of, or focusing on, the information that is present; the other sees a hiding of, or disregard for, the information no longer present.

For years now, the infovis community has privileged the encoding of spatial variables (size, position, shape, curvature, motion, etc.). This hierarchy places content emphasis on spatial variables rather than on characteristic variables (color, texture, transparency). Historically, one can see a similar hierarchy of variables in traditional schools of painting, in which sketches are laid out elaborately first, and shading and color are layered on only after the spatial encoding has been decided upon. Psychologically and physiologically, the basis of object recognition is closely tied to 2D scene analysis – a valuable faculty of identification, classification, and comparison that allows us to thrive and survive.

“I think that this key of spatial variables for human perception maybe the reason why all standard techniques for making graphs and charts developed in the 18th – 20th centuries use spatial dimensions to represent the key aspects of the data, and reserve other visual dimensions for less important aspects.”

At the end of the 20th century, visualization without reduction became a style all its own. Tag clouds (or word clouds) were an early form of “direct visualization” that used the medium of text and left it as text, while offering a new notational system of value: a word’s size encodes its frequency.
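That notational system can be sketched as a simple frequency-to-size mapping. A minimal Python sketch, where the square-root scaling, the point-size range, and the function name are my own illustrative assumptions rather than any particular tag-cloud tool’s behavior:

```python
from collections import Counter
import math

def tag_cloud(text, min_pt=10, max_pt=48):
    """Map each word's frequency to a font size: the core of a tag cloud.

    The scaling scheme and size range are illustrative assumptions.
    Returns {word: point size}.
    """
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    counts = Counter(w for w in words if w)
    top = counts.most_common(1)[0][1]
    sizes = {}
    for word, n in counts.items():
        # Square-root scaling keeps very frequent words from dwarfing the rest.
        sizes[word] = min_pt + (max_pt - min_pt) * math.sqrt(n / top)
    return sizes

sizes = tag_cloud("data maps data shows data and maps")
```

The text itself is retained as the visual mark; only scale is added, which is what makes the tag cloud “direct.”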

In Brendan Dawes’s Cinema Redux, frames from a film are rendered as pixelated miniatures and arranged in a matrix. This visualization method removes one from the experience of the film, but presents a visual form that permits temporal pattern recognition. Here, the reduction occurs at import, and for this reason the frames keep their likeness and resist being mapped onto visual primitives. The sampling pattern (one frame per second) is not so much an act of reduction as an act of sampling: each sample retains a one-to-one correspondence with the source material, even though it is only a representational fraction of the whole. Sampling should be acknowledged, but it should not disqualify a visualization as mere synecdoche lacking nuance.
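The matrix layout behind this approach can be sketched in a few lines. A Python sketch assuming one sampled frame per second and 60 thumbnails per row (one row per minute of film); the function name and thumbnail size are illustrative assumptions:

```python
def redux_layout(duration_s, cols=60, thumb=8):
    """Grid positions for a Cinema Redux-style frame matrix.

    Assumes one sampled frame per second of film and `cols` thumbnails
    per row (60 = one row per minute); `thumb` is the miniature's pixel
    size. Returns a (timestamp, x, y) tuple for each sampled frame.
    """
    cells = []
    for t in range(duration_s):      # one sample per second of film
        row, col = divmod(t, cols)   # wrap to a new row each minute
        cells.append((t, col * thumb, row * thumb))
    return cells

cells = redux_layout(125)  # a 2 min 5 s clip: 125 samples across 3 rows
```

Because time flows left to right and minutes stack top to bottom, recurring visual motifs align vertically, which is what makes the temporal patterns legible at a glance.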

Direct visualization often utilizes sampling but does not require it; advancements in interactivity can hide away the entirety of the raw information, revealing it only when a user queries it or chooses to zoom. While not a requirement, spatial structuring and layout of time-based variables is appropriate; space can orchestrate an appreciation for temporally experienced patterns.

“Thus, space turns to play a crucial role in direct visualization after all: it allows us to see patterns between media elements that are normally separated by time.”

In rethinking information visualization for modern times, one may conclude that the primary focus on spatial variables in visual encoding is a relic of technological limitations.

“I believe that direct visualization methods will be particularly important for humanities, media studies and cultural institutions which now are just beginning to discover the use of visualization but which eventually may adopt it as a basic tool for research, teaching and exhibition of cultural artifacts.”

The future promotion of direct visualization in the humanities and media studies will foster a deeper understanding of meaning and its connections to patterns.



Project: Thesis Readings 1

Embodiment in Data Sculpture: A Model of the Physical Visualization of Information
Jack Zhao and Andrew Vande Moere (2006)

“With human’s inherent proficiency in comprehending the physical affordances present in the real world, some researchers and designers are investigating how meaningful insights can be conveyed by way of sculpting data” (Zhao & Moere, 1).

  • Data sculpture (1) is created from data, (2) exists in space or is physical, (3) possesses both artistic and functional qualities, and (4) attempts to make obvious the insights and relevance of the data.
  • How data is best presented to inform, educate non-expert audiences, capture attention, and maintain curiosity is largely subjective and contextual.
  • Interpretation of physical objects comes from their affordances, something digital media and digital space do not inherently carry.
  • Data sculpture has the potential to communicate information to a mass, lay audience through touch, exploration, and possession. This externalization of data will now have functional and artistic qualities.


“Embodiment is based on the measurement of the distance between metaphor and data and between metaphor and reality” (Zhao & Moere, 2).

  • Qualities data sculpture can take on: physical property of depth and perspective, materiality, and nuance.
  • Data sculpture belongs to design subfields of information aesthetics, artistic visualization, or casual visualization.
  • A predecessor of data sculpture, ambient displays transform architectural space by embedding interfaces that stimulate an audience’s attention where none was previously warranted.

“In data sculpture, embodiment describes the expression of abstract data in physical representation through the process of data mapping. In information visualization, and by extension, in data sculptures, data mapping describes the process of translating data values to representations using metaphors. In such processes, metaphors become manifested in representations and draw associations between the abstract data and the perceiver’s prior knowledge or experiences. Metaphor is defined as a concept that is regarded as representative or symbolic to another concept. The primary function of a metaphor is to help people conceive an unfamiliar domain in terms of another familiar domain through drawing connections of similarity between the two” (Zhao & Moere, 3).

  • In the field of tangible computing, research into the use of metaphor has been based on the theory that users naturally relate what they are experiencing to what they already know. Stronger metaphors reference a specific mental image, afford the intended interaction, and sit within a mass audience’s realm of familiarity.


“A more precise definition of data sculpture has emerged from the domain model: a data sculpture is a highly data-oriented physical form, possessing both artistic and functional qualities, that facilitates an audience’s understanding of the underlying data and issues” (Zhao & Moere, 4).

“Our model relies on following three axioms:

1. Data sculpture is a system of physical representation and abstract data coupled by a relationship called embodiment.

2. Metaphor is a contributing factor to embodiment and can be gauged by metaphorical distances from the data and reality.

3. Different modes of embodiment determined by different metaphorical distances in data sculpture can affect the informative value.”

Data Stories: Data Sculpture

Data Stories Podcast: Episode 17


State of the Art: Part 1

State of the Art: Prior Works Research
Part 1

Thesis key words: Physical data visualization, data installation, data materiality, participatory data visualization

(1) Glue Society – “BT – Longterm Investor”

A series of (TV/digital media) spots using light sculpture to present estimates of investment data.

BT Financial ‘Superannuation’ from The Glue Society on Vimeo.

Materiality: Light

Architecture: digital, sloping, flat

Interaction: None, but animated

Pros: Metaphor equating light with ideas, positivity, and the future remains intact; narration clear; mood and graphics align with intended audience

Cons: Numbers not present until the end & their scale is small; light field is essentially flat, no conceivable reason for sloping plane; low resolution data

(2) Bryan Ku – “MB15 Minos”

An interactive installation for Moving Brands that visualized staff members as codified, three-dimensional, brightly patterned geometric solids based on office location, department, and other facts about the employees.

Materiality: none, digital

Architecture: Operating podium, projection

Interaction: Leap System, hand movements as a signal

Pros: Design of application made it approachable to party-goers; codified system able to be discovered (some hints found on side of operating podium); metaphors for socialization strong; integrated live stream of party-goers’ tweets and Instagrams

Cons: Lacks materiality; spatial presence brought about by utility; installation competes with experience of party

(3) Bryan Ku – “WIM•BLE•DON”

Flipbook data visualization that operates with a pair of users alternating page turns for the final game of a Wimbledon championship match.

Materiality: Paper, bound book

Architecture: none, mostly flat

Interaction: user-operated, chronological animation

Pros: metaphor in interaction between opponents; sleek visual design; excellent source of storytelling; user-operated creates controlled experience

Cons: unsure how unguided operation would begin; lack of relationship to body or space; experience heightened greatly by video track; assumes knowledge of rules of tennis to communicate story

(4) Doug McCune – “San Francisco Housing Prices”

A 3D-printed data sculpture that abstractly displays average price per square foot for housing in the San Francisco area.


Materiality: 3D-printed plastic

Architecture: none, ~12″ tall

Interaction: None, static

Pros: Form takes on powerful metaphor of ripping apart; content well-researched and clearly discerned from sculpture; excellent craftsmanship; process well-documented

Cons: No sense of data scale; lack of relation to human body or architecture

Process & Pitfalls: Writing in InfoVis

Process and Pitfalls in Writing Information Visualization Research Papers
Tamara Munzner (2008)

Applied Reading 1

Patrick J. O’Donnel

Munzner begins her meta-research paper, or model paper, supported by her involvement as Posters and Papers Chair of the IEEE Symposium on Information Visualization, by recognizing common pitfalls witnessed in research writing for the information visualization community.

“A good way to begin a research project is to consider where you want it to end” (Munzner, 2). This advice, as logical as it may sound, gives a false sense of applicability with its proverb-like brevity. My interpretation is that Munzner espouses a researcher’s awareness of a sound argument from the project’s conception. Taken too literally, the advice could bias any creative effort to eschew undesired forms. Instead, she most likely supports her later categorization of papers as having validation methods unique unto themselves. Breaking ground on a new research topic without these validation methods in mind could prove fruitless, despite richness of content and discovery.

Non-Exclusive Categories of Research Papers

(1) Technique Paper

The main contribution of a technique paper is a novel algorithm or implementation. The validation methods are beyond the scope of my thesis work at this time.

(2) Design Study

New visual representations in context of a problem are the contributions of Design Studies. In order to accurately justify the visual encodings utilized, one must include brief and relevant contextual history of the problem as well as any requirements obtained through task analysis, so that the appropriateness of the solution can be appraised. Furthermore, a researcher can also conduct and include case studies, scenarios of use, or evidence of adoption by a target audience to help support their solution’s approach. This style of paper is well within my personal technical abilities and theoretical scope for my thesis.

(3) Systems Paper

A Systems Paper evaluates the use of infrastructure, framework or toolkits in software or applications. These types of papers consider choices in structure rather than visual encodings. These types of papers are not within the scope of my abilities to author with my current thesis.

(4) Evaluation Paper

Information Visualization systems and techniques are examined in use by some target population in an Evaluation Paper. Both laboratory studies of abstracted tests and real-world behavioral field studies fall under the umbrella of this category. The lines between Evaluation Papers, Design Studies, and ethnography can be blurry, and the approaches often co-exist. This style of research is within my capabilities, but does not exclusively match the creation-of-works approach of my thesis.

(5) Model (Meta-Research) Paper

A Model Paper is considered a Meta-Research Paper because it presents formalisms and abstractions about the nature of work, production, and process. Taxonomy models seek to detail the space of some topic (such as categorization of other works). A Formalism model provides new terminology and methods by which to analyze past (and future) works. Commentary models craft an argument for a position relating to the field, much like an opinion column or advice but supported by observation, reflection and prediction. Some parts of my thesis will likely lean towards a Formalism Model paper, as it will detail my conceptual model for working with material, space, and interaction simultaneously.

Pitfalls in writing research papers come in many forms during all stages of researching and writing. Munzner suggests that many researchers fail to connect their contributions to either technique (algorithmic) or design. In a well-drafted design study paper, a well-versed information visualization professional must know how to “clearly state the problem” that can be addressed through visualization techniques, know those very techniques, and justify the technique used against other techniques in existence. When writing a paper that exists in more than one of these categories, understand which category is guiding your writing structure most, and which categories are secondary—and how to properly embed them not to distract from the primary purpose.

Justifying visual encoding and interaction methods is a necessary consideration for design study papers; do not skip discussing task analysis. Similarly, any proposed technique that does not discuss who might use it, or when, is hardly useful. Specificity of use case is not a requirement, but including at least abstractions of tasks within domains is advised in research documentation.

Visualizations in three dimensions are often necessary when the mental model of the content must be mapped less abstractly to afford quick understanding. When working with 3D spatial data, consider occlusion and interactivity that permits navigating perspective. But do not assume these problems solved, as human memory is limited in making judgments between a current viewpoint and a previous one.

Research papers should not read like a manual or a journal entry; they are not exhaustive of your process but tailored and designed to make an argument. The scope of your research (and thus your paper) should be self-contained and not so broad as to cover too many topics. A proper research paper should present the amount of material necessary to make your point and to allow the work to be reproduced. To avoid missing details or including unnecessary ones, consider using a sentence such as “My contribution is…” near the end of the Introduction and ensure your writing addresses that contribution thoroughly.

“What can we do that wasn’t possible before? How can we do something better than before? What do we know that was unknown or unclear before?” (Munzner, 12).

Convince the reader of your paper that your contributions are unique by detailing how your work differs from the established work of the intellectual community, past and present. Do not simply cite previous work; explain in what ways it does not solve the problem you’ve identified. Consider grouping previous works into categories to systematically analyze each work and its limitations. No assertion should go unattributed. If a fact is presented as justification and no source is cited (such as “general knowledge” or “conventional wisdom”), consider deleting it, making a different justification, or searching for research on that topic. Research papers that fail to disclose reflection on their own weaknesses, limitations, or implications are seen as unfinished.

When comparing your results to other work, compare with the most up-to-date work possible. Data sets chosen to test use-cases should be indicative of the data sets actual users would encounter. Tasks used to epitomize results should be justified, in that actual users would genuinely need the procedure. Cherry-picking tasks that showcase your solution’s strengths (or worse, hide its weaknesses) dilutes your results with bias.

Writing a research paper requires a calculated style that aims to produce understanding in the audience. It is often helpful to present solution descriptions in the order of what it is, why you chose it, and then how it satisfies the problem. Captions should be written in full-sentence, paragraph form so that a chart, diagram, or image could justifiably stand alone, and flipping through the paper would allow an overview via the images only. When comparing visual techniques to others, it is helpful to do so side-by-side, rather than relying on the capacity of human memory.