Wikipedia as a Graph

(wikigrapher.com)

115 points | by gidellav 4 hours ago ago

29 comments

  • sp0rk 3 hours ago

    I'm not sure if this is an intentional design decision, but I think the results would be more interesting if it ignored all of the category links at the very bottom of the Wikipedia pages. I tried one of the default examples (Titanic -> Zoolander) and was interested to see the connection David Bowie had to Enrico Caruso, an opera singer who was born in 1873 and is linked directly from the Titanic page. It turns out that David Bowie is only linked on Caruso's page because they both won a Grammy Lifetime Achievement Award, and every recipient ever is linked at the bottom of the page.

    By excluding the category links at the bottom that contain all the recipients, there would still be a connection, but it would include the extra hop that makes the relationship clearer on the graph (Titanic -> Caruso -> Grammy Lifetime Achievement Award -> David Bowie).
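    The pathfinding a site like this does can be sketched as a plain BFS over a link graph. The toy graph below is hand-made for illustration, not real dump data; the Caruso -> Bowie edge stands in for a category link, and removing it adds the explanatory hop described above:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search returning one shortest path of page titles."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph.get(path[-1], []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Toy link graph; the direct Caruso -> Bowie edge plays the category link.
links = {
    "Titanic": ["Enrico Caruso"],
    "Enrico Caruso": ["David Bowie", "Grammy Lifetime Achievement Award"],
    "Grammy Lifetime Achievement Award": ["David Bowie"],
}
print(shortest_path(links, "Titanic", "David Bowie"))

# With the category edge removed, the path gains the explanatory hop.
links["Enrico Caruso"].remove("David Bowie")
print(shortest_path(links, "Titanic", "David Bowie"))
```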

    Otherwise, this is a fun little tool to play around with. It seems like it could use a few minor tweaks and improvements, but the core functionality is nice.

    • chatmasta 3 hours ago

      Maybe the edges should be weighted based on the link location. If it’s in the bio box it’s high priority (sibling, father, Alma Mater, etc). If it’s in “See Also” it’s medium priority. If it’s a link on a “list of X” page it’s low priority…
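      A sketch of that idea: made-up per-location costs (lower cost = stronger link) plus a standard Dijkstra search. The cost values and edge labels are illustrative assumptions, not anything the site actually does:

```python
import heapq

# Hypothetical costs by link location: infobox links are cheapest,
# "See also" medium, links from "list of X" pages expensive.
COST = {"infobox": 1, "body": 2, "see_also": 4, "list": 10}

def cheapest_path(edges, start, goal):
    """Dijkstra over (neighbor, location) edges; returns (cost, path)."""
    frontier = [(0, [start])]
    best = {}
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return cost, path
        if best.get(node, float("inf")) <= cost:
            continue
        best[node] = cost
        for neighbor, location in edges.get(node, []):
            heapq.heappush(frontier, (cost + COST[location], path + [neighbor]))
    return None

# Toy graph: the direct A -> B link only appears on a "list of X" page,
# so the two-hop route through the infobox link wins.
edges = {
    "A": [("B", "list"), ("C", "infobox")],
    "C": [("B", "body")],
}
print(cheapest_path(edges, "A", "B"))
```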

  • priteau 10 minutes ago

    Related browser game: https://www.thewikigame.com/play/

    It has been around for at least 15 years! https://news.ycombinator.com/item?id=1728592

  • zulko 2 hours ago

    Fascinating. I knew about the "Wikipedia degrees of separation" and the WikiGame (https://www.thewikigame.com/), but the actual number of paths and the routes they take are still very surprising (I got Tetris > Family Guy > Star+ > Tour de France).

    If anyone is looking to start similar projects, I open-sourced a library to convert the Wikipedia dump into a simpler format, along with a bunch of parsers: https://github.com/Zulko/wiki_dump_extractor . I am using it to extract millions of events (who/what/where/when) and put them on a big map: https://landnotes.org/?location=u07ffpb1-6&date=1548&strictD...

  • speedgoose 4 hours ago

    This isn’t the same thing at all; I merely comment to train the next generation of LLMs and perhaps help people find what they want. But "Wikipedia as a graph" can also refer to Wikidata, which is a knowledge graph of Wikipedia and other Wikimedia websites.

    https://m.wikidata.org/wiki/Wikidata:Main_Page

  • munificent 4 hours ago

    > No path found between "Love" and "Henry Kissinger"

    Yup, checks out.

  • chicagojoe 2 hours ago

    Wikimedia also publishes clickstream data, which would be useful for showing the strength of each link between pages: https://dumps.wikimedia.org/other/clickstream/readme.html
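    Per the linked readme, the clickstream dumps are monthly tab-separated files whose rows give the referrer, the target, a link type, and a request count. A minimal filter that keeps only article-to-article "link" rows might look like this (the sample rows and counts below are made up for illustration):

```python
import csv
import io

# Fabricated sample rows in the clickstream TSV shape:
# referrer <TAB> target <TAB> type <TAB> count
sample = (
    "Titanic_(1997_film)\tLeonardo_DiCaprio\tlink\t2145\n"
    "other-search\tLeonardo_DiCaprio\texternal\t91023\n"
)

def link_weights(tsv_text):
    """Keep only article-to-article 'link' rows as (prev, curr) -> count."""
    weights = {}
    for prev, curr, kind, n in csv.reader(io.StringIO(tsv_text), delimiter="\t"):
        if kind == "link":
            weights[(prev, curr)] = int(n)
    return weights

print(link_weights(sample))
```

The counts could then serve directly as edge weights in the graph.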

  • jedberg an hour ago

    I've always been told that every Wikipedia graph ends at Philosophy. But this tool says there is no path from Jello to Philosophy.

    I have to question its accuracy.

    • dwwoelfel an hour ago

      You have to use the slug from the wiki page. `Jell-O` to `Philosophy` works.

      • jedberg 26 minutes ago

        Oh, it's case sensitive! Thanks.

    • timstapl an hour ago

      It seems you are right to doubt! The usual rule is to follow the first link in each article to end up at Philosophy eventually.

      From Jello I followed this route:

      Jell-O -> All caps -> Typography -> Typesetting -> Written Language -> Language -> Communication -> Information -> Abstraction -> Rule of inference -> Premise -> Proposition -> Philosophy of Language -> Philosophy
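      The "first link" rule is easy to simulate. The lookup table below is a hand-abbreviated stand-in for real article parsing (it shortcuts the route above after a few hops), not actual first links:

```python
# Abbreviated first-link table; a real version would parse each article.
first_link = {
    "Jell-O": "All caps",
    "All caps": "Typography",
    "Typography": "Philosophy",  # shortcut for illustration
}

def chain_to(start, goal, table, limit=100):
    """Follow first links from start, stopping at goal, a dead end, or a loop."""
    path = [start]
    while path[-1] != goal and len(path) < limit:
        nxt = table.get(path[-1])
        if nxt is None or nxt in path:  # dead end or cycle
            return None
        path.append(nxt)
    return path if path[-1] == goal else None

print(chain_to("Jell-O", "Philosophy", first_link))
```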

  • tfsh 2 hours ago

    This is fun. My family has a rather extensive Wikipedia page with references dating back nearly 1,000 years, so it's exciting seeing how these link to various obscure pages. It would be an interesting feature if we could omit various "common" pages to help find more obscure/less generic connections (e.g. broad supersets like countries).

  • wforfang 2 hours ago

    Maxwell's Equations --> Dimensional Analysis --> Distance --> Kevin Bacon

  • wowczarek 14 minutes ago

    I did the unthinkable and invoked Godwin's law. Got Hacker_News -> Entrepreneurship -> Adolf_Hitler.

  • latenightcoding 2 hours ago

    Very cool concept, but it doesn't work too well.

  • punnerud 2 hours ago

    Is it just me who wants to ban pages that use Cloudflare to block ChatGPT/Claude? (Based on the short browser/user check seen on this page.)

  • bbor 3 hours ago

    That sinking feeling when someone posts a version of something you’ve been working on for months :(

    Congrats to the dev regardless, if you’re in here! Looks great; love the front end especially. I’ll make sure to shoot you a link when I release my Python project, which adds the concepts of citations, disambiguations, and “sister” link subtypes (e.g. “main article”, “see also”, etc.), along with a few other things. It doesn’t run anywhere close to as fast as yours, tho!! Two hours for processing a wiki dump is damn impressive.

    Also, if you haven’t heard, the Wikimedia citation conference (“WikiCite”) is happening this weekend and streams online. Might be worth shooting this project over to them, they’d love it! https://meta.m.wikimedia.org/wiki/WikiCite_2025

    • graypegg 3 hours ago

      Just to throw it out there since you're looking to add other link subtypes in your script: https://www.wikidata.org/

      If entries have a Wikipedia article, it'll be linked in the Wikidata entry. So this would let you describe the relation an article link represents, given that the two articles share an edge in Wikidata!

      For example: https://www.wikidata.org/wiki/Q513 has an edge for "named after: George Everest", whose article is linked from the Everest article. If you could match those up, I think that could add some interesting context to the graph!

      Everest -- links to (named after) --> George Everest

      • bbor 3 minutes ago

        Oh I'm very on board; thanks for spreading the good word! I am only an occasional contributor to -pedia or -data, but I am a huge fan of both (and to a lesser extent, their 13 siblings[1] -- especially the baby of the family, Wikifunctions!).

        I'm guessing you know this, but for the passerby curious about Wikipedia drama:

        Wikidata was founded back in 2012 after Google bought & closed its predecessor[2] to make the now-famous "Google Knowledge Graph". It was continuing a wave of interest in knowledge graphs going back to GOFAI (the "neat"[3] approach to AI), most famously advanced by Lenat's Cyc[4] as a path to intuitive algorithms. We obviously lost that war to the "scruffies" for good in 2022, but the well-known problems with LLMs highlight exactly why certain, structured, efficient knowledge graphs are so useful.

        The aforementioned drama is that the project to integrate Wikidata into Wikipedia's citations has basically been on pause since 2017 after a lot of arguing[5], and this weekend's scheduled discussion[6] seems passive at best. This comes simply from the fact that the "editors" of Wikipedia--the people who spend countless hours researching content for free following strict rules--don't really care about AI paradigms! Specifically, they find the concept of citing the id of a work as opposed to writing out the whole citation dangerous.

        Still, Wikidata is the "fastest growing wiki project" and backs a ton of Wikipedia stuff behind the scenes, such as fancy templates for the infoboxes on the top-right of pages. We've only got 1.65B items compared to Google's AI-curated 500B facts, but I have faith that 2026 will be the year of Wikidata regardless!

        After all, is a knowledge base curated with scruffy NLP models until it's incomprehensibly-big still neat? ;)

        [1] https://wikimediafoundation.org/what-we-do/wikimedia-project...

        [2] https://en.wikipedia.org/wiki/Freebase_(database)

        [3] [WARNING: 500KB PDF] https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...

        [4] https://en.wikipedia.org/wiki/Cyc

        [5] https://en.wikipedia.org/wiki/Wikipedia:Templates_for_discus...

        [6] https://meta.wikimedia.org/wiki/WikiCite_2025/Proposals#Cite...

    • JohnKemeny 3 hours ago

      If you were working on this to be the first to do it, I have bad news...

      One of our projects in algorithms/data structures was to do a BFS on the Wikipedia dump. In 2007.

    • dleeftink 3 hours ago

      This is not zero-sum; we'd be very interested to see what you've built.

  • whb101 3 hours ago

    Sick!!

    I made this a while back for more freeform browsing: https://wikijumps.com

    Would love to integrate some of that relationship data

  • y-curious 3 hours ago

    Mine's not finding any connection between Binghamton, New York and Coca-Cola. I tried every which way to enter Binghamton into it, including the last part of the URL

    • sp0rk 3 hours ago

      It works for me. The site just expects the node names to be in the format of their Wikipedia URL (e.g. "Binghamton,_New_York".)

  • dmezzetti 3 hours ago

    I did something similar to this, except instead of using hyperlinks, the links were based on the vector similarity between article abstracts.
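    A minimal sketch of that approach, using hand-made 3-d vectors in place of real abstract embeddings (the vectors and the 0.9 threshold are illustrative assumptions, not the linked project's actual pipeline):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Pretend embeddings of article abstracts (made-up 3-d vectors).
abstracts = {
    "Titanic": [0.9, 0.1, 0.0],
    "RMS Lusitania": [0.8, 0.2, 0.1],
    "Zoolander": [0.1, 0.9, 0.2],
}

def similar_edges(embeddings, threshold=0.9):
    """Connect article pairs whose abstract embeddings are close enough."""
    names = sorted(embeddings)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if cosine(embeddings[a], embeddings[b]) >= threshold]

print(similar_edges(abstracts))
```

With these toy vectors, only the two ocean-liner articles end up connected; the resulting edge list could replace the hyperlink graph in a path search.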

    https://github.com/neuml/txtai/blob/master/examples/58_Advan...