Does The Verge have this big a font, or is something broken on my end?
You can download the entirety of Wikipedia for offline use, BTW. I do this with an application called Kiwix (https://kiwix.org/en/).
Click “All Files” on the left menu of the program.
In the bottom search bar (there are two, one at the top and one at the bottom), type “wikipedia” to show only the entries matching the search.
Then click on the “Size” header to sort all entries by size. Usually the biggest one is the most complete.
Now “Download” it (I already have it, so it says “Open” for me).
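For anyone who prefers the command line, the same content Kiwix shows in its GUI is published as ZIM files on the Kiwix download server. A minimal sketch; the exact filename (including its build date) is an assumption, so browse the directory listing for the current name:

```shell
# ZIM files live under https://download.kiwix.org/zim/wikipedia/ .
# The filename below is hypothetical: builds are re-dated, so check the
# listing for the current "en_all_maxi" (full English, with images) name.
BASE="https://download.kiwix.org/zim/wikipedia"
ZIM="wikipedia_en_all_maxi_2024-01.zim"
echo "$BASE/$ZIM"        # the URL to hand to wget/curl
# wget -c "$BASE/$ZIM"   # -c resumes the (100+ GB) download if interrupted
# kiwix-serve --port=8080 "$ZIM"   # then read it at http://localhost:8080
```

The `kiwix-serve` tool (from kiwix-tools) gives you the same local reading experience as the desktop app, just in a browser.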
Note that the big one at 111 GB contains images and covers all English-language Wikipedia articles. The 43 GB one should be the same, I think, but without images. There are many other variants too, varying in content, theme, and even build date. For example, the one labeled “1m Top” contains only the top 1 million articles.
It doesn’t actually include all the media, nor – I think – the edit history. It does give you a decent offline copy of the articles, though, with at least thumbnails of the images.
Edit: If you want all the media from Wikimedia Commons (which may also include files that are not used directly in Wikipedia articles), the stats for that are:
Total file size for all 126,598,734 files: 745,450,666,761,889 bytes (677.98 TB), according to their media statistics page.
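As a quick sanity check of that figure (this just redoes the arithmetic), dividing the quoted byte count by 2^40 reproduces the “677.98 TB”, so the page’s “TB” is evidently the binary unit (tebibytes); the decimal value would be about 745 TB:

```shell
# Convert the quoted byte count to tebibytes (2^40 bytes each).
awk 'BEGIN { printf "%.2f\n", 745450666761889 / 1024^4 }'
# prints 677.98
```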
The problem with this solution is that it leaves out the most important part of Wikipedia: the editors. Wikipedia is a living document, constantly being updated and improved. Sure, you can preserve a fossil version of it. But if the site itself goes down, that fossil will lose value rapidly, and it’s not even going to be useful for creating a new live site, because it doesn’t include the full history of articles (legally required under Wikipedia’s license) and won’t be the latest database dump from the moment Wikipedia shut down.
Some solution is better than no solution. I don’t mind having a ‘fossil’ version for a pinch. We got along okay with hardcover encyclopedias pre-internet, and this is not that different, except that it still relies on electricity. (I have different, more valuable books on hand if we ever wind up THAT fucked.)
My point is that the alternative isn’t “no solution”, it’s “the much better database dump from Internet Archive or Wikimedia Foundation or wherever, the one that a new Wikipedia instance actually would be spun up from, not the one that you downloaded months ago and stashed in your closet.”
The fact that random people on the Internet have old, incomplete, static copies of Wikipedia doesn’t really help anything. The real work of bringing back Wikipedia would be creating new hosting infrastructure capable of handling it, not scrounging up a database to put on it.
Sure, but are any of these “don’t worry guys I torrented a database dump, it’s safe now” folks going to go to the trouble of actually doing that? They’re not even downloading a full backup, just the current version.
You need to devote a lot of bandwidth to keeping continuously up to date with Wikipedia. There are only a few archives out there likely doing that, plus of course the Wikimedia Foundation and its international chapters themselves. Those are the ones who will provide the data needed to restart Wikipedia, if it actually comes to that.
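For what it’s worth, the database dumps being discussed here are public: the Wikimedia Foundation publishes them regularly at dumps.wikimedia.org. A sketch of grabbing the current-revisions-only English dump (the standard “pages-articles” variant, no media and no full edit history; the size figure is a rough assumption):

```shell
# Standard dump URL pattern; "latest" points at the most recent dump run.
URL="https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2"
echo "$URL"
# wget -c "$URL"   # roughly 20 GB compressed; -c resumes if interrupted
```

A full-history variant (“pages-meta-history”) also exists and is what a real revival would actually need, at a much larger size.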
The fact that you can download the entirety of the site for 111 GB sounds pretty damn impressive to me.
Isn’t there a way to sync the copy to the current version?