Currently, Wikipedia is available in almost 300 languages – but the content of each language edition is independently written and maintained. As a result, some language editions have excellent, up-to-date articles on a broad variety of topics, while others struggle to cover even the most central topics an encyclopedia should include. Conversely, knowledge about local topics is often missing from the larger language editions. This generates and maintains an imbalance between the different language communities on the Web. In this paper we present the idea of creating Wikipedia in a language-independent manner, which is then rendered into natural language when read. Content in Wikipedia would thus be centrally maintained, everyone could contribute to it in their own language, and the content would be available to everyone.
Billions of people who currently have no access to knowledge in their native languages could access far more knowledge than today. Many people who have traditionally been underserved by our knowledge economy could, sometimes for the first time, access a large corpus of knowledge for free. Not only could they read much more than ever before, they could also contribute their own knowledge to a central store that makes it available to everyone on the Web. Communities that have been marginalized by language barriers would be enabled to contribute to and share in the sum of all knowledge, Wikipedia.
I work on the ontology of the Google Knowledge Graph, where I help to understand the knowledge we already have in the Knowledge Graph and how it can be used to serve users better, to understand their queries and answer them.
I studied Computer Science and Philosophy at the University of Stuttgart, Germany, received a Dr. rer. pol. at the Faculty of Economics at the KIT, and have since worked at Wikimedia, where I created Wikidata, and then at Google.