Data Commons

Knowledge repository integrating open datasets
Data Commons is an open-source platform[1] created by Google[2] that provides an open knowledge graph, combining economic, scientific and other public datasets into a unified view.[3] Ramanathan V. Guha, a creator of web standards including RDF,[4] RSS, and Schema.org,[5] founded the project,[6] which is now led by Prem Ramaswami.[7]

[Image: Results for a query in Data Commons]

Founder(s): Ramanathan V. Guha
Key people: Prem Ramaswami (Head of Data Commons)
Parent: Google
URL: datacommons.org
Launched: May 2018

The Data Commons website was launched in May 2018 with an initial dataset consisting of fact-checking data published in Schema.org "ClaimReview" format by several fact checkers from the International Fact-Checking Network.[8][9] Google has worked with partners such as the United Nations (UN) to populate the repository,[2] which also includes data from the United States Census, the World Bank, the US Bureau of Labor Statistics,[10] Wikipedia, the National Oceanic and Atmospheric Administration and the Federal Bureau of Investigation.[11]

The service expanded during 2019 to include an RDF-style knowledge graph populated from a number of largely statistical open datasets, and was announced to a wider audience that year.[12] In 2020 the service improved its coverage of non-US datasets, while also expanding into bioinformatics and coronavirus-related data.[13] In 2023, the service relaunched with a natural-language front end powered by a large language model.[2] It also launched as the back end to the UN data portal with Sustainable Development Goals data.[14]

Features

Data Commons places more emphasis on statistical data than is common for linked data and knowledge graph initiatives. It includes geographical, demographic, weather and real estate data alongside other categories,[3] describing states, Congressional districts, and cities in the United States as well as biological specimens, power plants, and elements of the human genome via the Encyclopedia of DNA Elements (ENCODE) project.[11] It represents data as semantic triples each of which can have its own provenance.[3] It centers on the entity-oriented integration of statistical observations from a variety of public datasets. Although it supports a subset of the W3C SPARQL query language,[15] its APIs[16] also include tools — such as a Pandas dataframe interface — oriented towards data science, statistics and data visualization.
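The triple-with-provenance model described above can be sketched minimally in Python. This is an illustrative data structure, not Data Commons' actual internal schema; the entity identifier and property names are assumptions modeled on its public conventions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    """A semantic triple that carries its own provenance."""
    subject: str      # entity identifier, e.g. a place
    predicate: str    # property being described
    obj: object       # value or target entity
    provenance: str   # source dataset for this specific triple

# Two datasets can describe the same entity; each fact keeps its own source.
graph = [
    Triple("geoId/06", "name", "California", "usCensus"),
    Triple("geoId/06", "population", 39538223, "usCensus"),
    Triple("geoId/06", "medianTemperature", 15.2, "noaa"),
]

def about(entity, triples):
    """Entity-oriented lookup: gather every known fact about one entity,
    together with where each fact came from."""
    return {t.predicate: (t.obj, t.provenance)
            for t in triples if t.subject == entity}

facts = about("geoId/06", graph)
print(facts["population"])  # (39538223, 'usCensus')
```

This entity-centric view, where facts about one subject are merged across sources but remain individually attributable, is the integration style the article describes.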

Data Commons is integrative, meaning that it does not provide a hosting platform for different datasets, but rather attempts to consolidate much of the information provided by the datasets into a single data graph.

Technology

Data Commons is built on a graph data model. The graph can be accessed through a browser interface and several APIs,[3][11] and is expanded by loading data, typically as CSV files paired with MCF-based templates.[17] The graph can also be queried in natural language through Google Search.[18] The vocabulary used to define the datacommons.org graph is based on Schema.org.[3] In particular, the terms StatisticalPopulation[19] and Observation[20] were proposed to Schema.org to support Data Commons-like use cases.[21]
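A data import of the kind mentioned above is typically described by a small template in MCF (Meta Content Framework) that maps CSV columns onto graph nodes. The fragment below is an illustrative sketch only: the import name, variable, and column names are hypothetical, and the exact property set may differ from the project's current documentation.

```
# Hypothetical template MCF: each CSV row becomes one observation node.
# "E:" references an entity defined by the template; "C:" references a CSV column.
Node: E:MyImport->E0
typeOf: dcs:StatVarObservation
variableMeasured: dcid:Count_Person
observationAbout: C:MyImport->place_dcid
observationDate: C:MyImport->year
value: C:MyImport->count
```

Under this scheme, a plain statistical table is translated row by row into typed graph nodes that link back to entities already in the knowledge graph.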

Software from the project is available on GitHub under the Apache 2.0 license.[22]

References
