Chart Crimes: How to Catch Data Visualizations in the Act

Today’s post was written by Ashley Freeman, a first-year environmental science major and library outreach student assistant. 

“Chart Crimes: How to Catch Data Visualizations in the Act” was the William H. Library’s most recent workshop in its fall 2025 “Digital Citizenship” series. The workshop, held on Nov. 6 on Zoom, was hosted by three of LMU’s reference and instruction librarians, who demonstrated techniques for evaluating the credibility of data visualizations.

The librarians began by polling attendees on their confidence in their ability to spot data manipulation in visuals, such as charts, graphs, and figures, and even in the wording and interpretation of data. However, spotting biased or misleading data visualizations in the real world proved to be more complex than the participants may have believed. Data visualizations are useful tools for communicating statistical evidence, but the same qualities that make data easy to communicate can just as easily be manipulated to distract viewers and misrepresent the data.

Data can be displayed in distinctive ways to highlight key information for the viewer. Attributes such as colors, shapes, sizes, trends, patterns, and more all affect which aspects of the data the viewer’s eyes are drawn to. After going over some of the common types of graphs, their individual attributes, and the kinds of data they display best, the librarians drew attention to the chart elements that are critical for the clarity and effectiveness of a data visualization. Identifying missing elements is one step in thinking critically about data and helps viewers know which visualizations are safe to trust. But not all chart errors are as easy to spot as missing axes or citation information.
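As a rough illustration of why axis choices matter (the numbers here are hypothetical, not from the workshop), consider how a truncated y-axis can make a small difference look dramatic. A minimal Python sketch of the arithmetic:

```python
# Illustrative sketch: how a truncated y-axis exaggerates differences.
# Two hypothetical bar values (not taken from any real chart).
a, b = 50.0, 52.0

# True relative difference: b is only 4% larger than a.
true_ratio = b / a

# If the y-axis starts at 49 instead of 0, the drawn bar heights become:
baseline = 49.0
drawn_a = a - baseline  # height of bar a on the chart
drawn_b = b - baseline  # height of bar b on the chart

# Apparent ratio of the drawn bars -- b now *looks* three times taller.
apparent_ratio = drawn_b / drawn_a

print(f"true ratio: {true_ratio:.2f}, apparent ratio: {apparent_ratio:.2f}")
```

The underlying numbers never change; only the baseline does, which is why checking where an axis starts is such a quick and effective credibility test.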

The workshop presented the “D.I.G. for Data” framework as a tool observers can use to evaluate the credibility of a figure. The acronym stands for “Data source,” “Interpretation,” and “Graphics and Gauging validity.” Step one, “Data source,” requires the viewer to identify where the data was collected, how the data was collected, who collected the data, and who is represented in the sample (and conversely, who isn’t represented in the sample). The “Interpretation” step asks the viewer to question whether there are any biases, assumptions, data blind spots, or unsupported claims in the figure. This includes exploring the context of the data presented, such as its political, historical, cultural, and social context. A good tip: Always ask yourself whether the chart implies that correlation equals causation or commits other fallacies. Finally, the “G” in “D.I.G. for Data,” “Graphics and Gauging validity,” asks the viewer to examine the design choices in the chart and whether those choices accurately represent the data. Looking at figure elements like layout, scale, axes, proportion, titles, and error bars is an essential step for determining whether the chart’s conclusions and presentation of data are reputable. Using this framework, one can more easily identify mistakes and inaccuracies in data presentations.
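The three steps above could be written down as a simple checklist. The sketch below is my own paraphrase of the framework (the function name and question wording are illustrative, not from the workshop materials):

```python
# Hypothetical sketch of the "D.I.G. for Data" steps as a checklist.
# The questions paraphrase the framework; names are illustrative.
DIG_CHECKLIST = {
    "Data source": [
        "Where was the data collected?",
        "How was the data collected?",
        "Who collected the data?",
        "Who is (and isn't) represented in the sample?",
    ],
    "Interpretation": [
        "Are there biases, assumptions, or data blind spots?",
        "Are any claims unsupported by the data?",
        "Does the figure imply correlation equals causation?",
    ],
    "Graphics and Gauging validity": [
        "Do layout, scale, axes, and proportions match the data?",
        "Are titles and error bars present and accurate?",
    ],
}

def review_figure(answers):
    """Return the checklist questions still unresolved for a figure.

    `answers` maps a question string to True once the reviewer is
    satisfied on that point; anything missing or False is flagged.
    """
    return [
        question
        for questions in DIG_CHECKLIST.values()
        for question in questions
        if not answers.get(question, False)
    ]
```

For example, a figure whose reviewer has answered nothing yet would get all nine questions back, while one vetted on every point would return an empty list.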

To put the framework into practice, workshop participants took part in the “D.I.G. for Data Challenge,” which put all three steps of the data-evaluation skill set to the test. The challenge asked participants to look at real-world examples of data visualizations to see if they could spot the errors in real time. Practicing identifying misleading data in the real world is crucial, especially since it is more difficult to spot in the wild than many would think.

Ultimately, data has the power to inform as well as mislead. Being critical and curious about the reliability of the data visualizations you see is imperative for staying educated, informed, and unbiased. The “D.I.G. for Data” framework from the workshop is a valuable tool for evaluating data credibility. The slide deck and other materials from the workshop are available on the “Digital Citizenship Workshops” LibGuide.