ASCII by Jason Scott

Jason Scott's Weblog

The Followup —

Writing about my heart attack garnered some attention. I figured it was only right to fill in later details and describe my plans going forward.

After the previous entry, I went back into the emergency room of the hospital I was treated at, twice.

The first time was because I “felt funny”; I had no grip on whether this was the new normal, so just to understand that, I went back in and got some tests. They did an EKG and a blood test, and let me know all my stats were fine and I was healing according to schedule. That took a lot of stress away.

Two days later, I went in because I was having a marked shortness of breath, where I could not get enough oxygen in and it felt a little like I was drowning. Another round of tests, and one of the cardiologists mentioned a side effect of one of the drugs I was taking was this sort of shortness/drowning. He said it usually went away and the company claimed 5-7% of people got this side effect, but that they observed more like 10-15%. They said I could wait it out or swap drugs. I chose swap. After that, I’ve had no other episodes.

The hospital thought I should stay in Australia for 2 weeks before flying. Thanks to generosity from both MuseumNext and the ACMI, my hosts, that extra AirBnB time was basically paid for. MuseumNext also worked to help push my international flight back by the weeks needed; a very kind gesture.

Kind gestures abounded, to be clear. My friend Rochelle, visiting from New Zealand, extended her stay an extra week; Rachel extended hers to match my new departure date. Folks rounded up funds and sent them along, which helped cover some additional costs. Visitors stopped by the AirBnB when I wasn’t really taking any walks outside, to provide additional social contact.

Here is what the blockage looked like, before and after. As I said, roughly a quarter of my heart wasn’t getting any significant blood and somehow I pushed through it for nearly a week. The insertion of a balloon and then a metal stent opened the artery enough for the blood flow to return. Multiple times, people made it very clear that this could have finished me off handily, and mostly luck involving how my body reacted was what kept me going and got me in under the wire.

From the responses to the first entry, it appears that a lot of people didn’t know heart attacks could be a lingering, growing issue and not just a bolt of lightning that strikes in the middle of a show or while walking down the street. If nothing else, I’m glad it’s made a number of people aware of how the symptoms can present themselves, as well as getting people to check their cholesterol, which I didn’t see as a huge danger compared to other factors, and which turned out to be significant indeed.

As for drugs, I’ve got a once-a-day waterfall of pills for blood pressure, cholesterol, heart healing, anti-clotting, and my long-standing annoyance of gout (which I’ve not had for years thanks to the pills). I’m on some of them for the next few months, some for a year, and some forever. I’ve also been informed I’m officially at risk for another heart attack, but the first heart attack was my hint in that regard.

As I healed, and understood better what was happening to me, I got better remarkably quickly. There is a single tiny dot on my wrist from the operation, and another tiny dot where the IV was in my arm at other times. Rachel gifted me a more capable Fitbit to replace the one I had; the new one tracks sleep and heart rate, just to keep an eye on things.

A day after landing back in the US, I saw a cardiologist at Mt. Sinai, one of the top doctors, who gave me some initial reactions to my charts and information: I’m very likely going to be fine, maybe even better than before. I need to take care of myself, and I already was. If I smoked or drank, I’d have to stop, but since I’ve never had alcohol and never smoked, I’m already ahead of that game. I enjoy walking, a lot. I stay active. And as of getting out of the hospital, I am vegan for at least a year. Caffeine’s gone. Raw vegetables are in.

One might hesitate to put this all online, because the Internet is spectacularly talented at generating hatred and health advice. People want to help – it comes from a good place. But I’ve got a handle on it and I’m progressing well; someone hitting me up with a nanny-finger-wagging paragraph and 45 links to change-your-life-buy-my-book.com isn’t going to help much. But go ahead if you must.

I failed to mention it before, but when this was all going down, my crazy family at the Internet Archive jumped in, everyone from Dad Brewster through to all my brothers and sisters scrambling to find me my insurance info and what they had on their cards, as I couldn’t find mine. It was really late at night when I first pinged everyone with “something is not good”, and everyone has been rather spectacular over there. Then again, they tend to be spectacular, so I sort of let that slip by. Let me rectify that here.

And now, a little bit on health insurance.

I had travel insurance as part of my health insurance with the Archive. That is still being sorted out, but in the meantime a large deposit had to be put on the Archive’s corporate card as a down-payment – another fantastic generosity, even if it’s technically a loan. I welcome the coming paperwork and nailing down of financial brass tacks for a specific reason:

I am someone who once walked into an emergency room with no insurance (back in 2010), got a blood medication IV, stayed around a few hours, and went home, generating a $20,000 medical bill in the process. It got knocked down to $9k over time, and I ended up being thrown into a low-income program they had that allowed them to write it off (I think). That bill could have destroyed me, financially. Therefore, I’m super sensitive to the costs of medical care.

In Australia, it is looking like the heart operation and the 3 day hospital stay, along with all the tests and staff and medications, are going to round out around $10,000 before the insurance comes in and knocks that down further (I hope). In the US, I can’t imagine that whole thing being less than $100,000.

The biggest culture shock for me was how little any of the medical staff, be they doctors or nurses or administrators, cared about the money. They didn’t have any real info on what things cost, because pretty much everything is free there. I’ve been equating it to asking a restaurant where the best toilets are to use a few hours after your meal – they might have some random ideas, but nobody’s really thinking that way. It was a huge factor in my returning to the emergency room so willingly; each visit, all-inclusive, was $250 AUD, which is even less in US dollars. $250 is something I’ll gladly pay for peace of mind, and I did, twice. The difference in the experience is remarkable. I realize this is a hot-button issue now, but chalk me up as another person for whom a life-changing experience came remarkably close to influencing where I might live in the future.

Dr. Sonny Palmer, who inserted my stent in the operating room.

I had a pile of plans and things to get done (documentaries, software, cutting down on my possessions, and so on), and I’ll be getting back to them. I don’t really have an urge to maintain some sort of health narrative on here, and I certainly am not in the mood to urge any lifestyle changes or preach a way of life to folks. I’ll answer questions if people have them from here on out, but I’d rather be known for something other than powering through a heart attack, and maybe, with some effort, I can do that.

Thanks again to everyone who has been there for me, online and off, in person and far away, over the past few weeks. I’ll try my best to live up to your hopes about what opportunities my second chance at life will give me.

 


The Other Half —

On January 19th of this year, I set off to California to participate in a hastily-arranged appearance in a UCLA building to talk about saving climate data in the face of possible administrative switchover. I wore a fun hat, stayed in a nice hotel, and saw an old friend from my MUD days for dinner. The appearance was a lot of smart people doing good work and wanting to continue with it.

While there, I was told my father’s heart surgery, which had some complications, was going to require an extended stay, and we were running out of relatives and companions to accompany him. I booked a flight for seven hours after I’d arrive back in New York to go to North Carolina and stay with him. My father has means, so I stayed in a good nearby hotel room. I stayed with him for two and a half weeks, logging ten- to sixteen-hour days to accompany him through a maze of annoyances, indignities, smart doctors, and nurses ranging from saints to morons, and to generally ensure his continuance.

In the middle of this, I had a non-movable requirement to move the manuals out of Maryland and send them to California. Looking through several possibilities, I settled on: drive five hours to Maryland from North Carolina, do the work across three days, and drive back to North Carolina. The work in Maryland had a number of people helping me, and involved pallet jacks, forklifts, trucks, and crazy amounts of energy drinks. We got almost all of it, with a third batch ready to go. I drove back the five hours to North Carolina and caught up on all my podcasts.

I stayed with my father another week and change, during which I dented my rental car, and hit another hard limit: I was going to fly to Australia. I also, to my utter horror, realized I was coming down with some sort of cold/flu. I did what I could – stabilized my father’s arrangements, went into the hotel room, put on my favorite comedians in a playlist, turned out the lights, took 4,000mg of Vitamin C, banged down some orange juice, took some Mucinex, and covered myself in 5 blankets. I woke up 15 hours later in a pool of sweat, feeling like I’d crossed the boundary with that disease. I went back to the hospital to make sure my dad was OK (he was), and then prepped for getting back to NY, where I discovered almost every flight for the day was booked due to so many cancelled flights the previous day.

After lots of hand-wringing, I was able to book a very late flight from North Carolina to New York, and stayed there for 5 hours before taking a 25 hour two-segment flight through Dubai to Melbourne.

I landed in Melbourne on Monday the 13th of February, happy that my father was stable back in the US, and prepping for my speech and my other commitments in the area.

On Tuesday I had a heart attack.

We know it happened then, or began to happen, because of the symptoms I started to show – shortness of breath, a feeling of fatigue and an edge of pain that covered my upper body like a jacket. I was fucking annoyed – I felt like I was just super tired and needed some energy, and energy drinks and caffeine weren’t doing the trick.

I met with my hosts for the event I’d do that Saturday, and continued working on my speech.

I attended the conference for that week, did a couple interviews, saw some friends, took some nice tours of preservation departments and discussed copyright with very smart lawyers from the US and Australia.

My heart attack continued, blocking off what turned out to be a quarter of the blood flow to my heart.

This was annoying me, but I didn’t know what it was, so across the week, according to my Fitbit, I walked 25 miles, climbed 100 flights of stairs, and maintained hours of exercise trying to snap out of it.

I did a keynote for the conference. The next day I hosted a wonderful event for seven hours. I asked for a stool because I said I was having trouble standing comfortably. They gave me one. I took rests during it, just so the DJ could get some good time with the crowds. I was praised for keeping the crowd jumping and giving it great energy. I’d now been having a heart attack for four days.

That Sunday, I walked around Geelong, a lovely city near Melbourne, and ate an exquisite meal at Igni, a restaurant whose menu basically has one line to tell you you’ll be eating what they think you should have. Their choices were excellent. Multiple times during the meal, I dozed a little, as I was fatigued. When we got to the tram station, I walked back to the apartment to get some rest. Along the way, I fell to the sidewalk and got up after resting.

I slept off more of the growing fatigue and pain.

The next day I had a second exquisite meal of the trip at Vue de monde, a meal that lasted from about 8pm to midnight. My partner Rachel loves good meals and this is one of the finest you can have in the city, and I enjoyed it immensely. It would have been a fine last meal. I’d now been experiencing a heart attack for about a week.

That night, I had a lot of trouble sleeping. The pain was now a complete jacket of annoyance on my body, and there was no way to rest that didn’t feel awful. I decided medical attention was needed.

The next morning, Rachel and I walked 5 blocks to a clinic, found it was closed, and walked further to the RealCare Health Clinic. I was finding it very hard to walk at this point. Dr. Edward Petrov saw me, gave me some therapy for reflux, found it wasn’t reflux, and got concerned, especially as having my heart checked might cost me something significant. He said he had a cardiologist friend who might help, and he called him, and it was agreed we could come right over.

We took a taxi over to Dr. Georg Leitl’s office. He saw me almost immediately.

He was one of those doctors that only needed to take my blood pressure and check my heart with a stethoscope for 30 seconds before looking at me sadly. We went to his office, and he told me I could not possibly get on the plane I was leaving on in 48 hours. He also said I needed to go to Hospital very quickly, and that I had some things wrong with me that needed attention.

He had his assistants measure my heart and take an ultrasound, wrote something on a notepad, put all the papers in an envelope with the words “SONNY PALMER” on them, and drove me personally over in his car to St. Vincent’s Hospital.

Taking me up to the cardiology department, he put me in the waiting room of the surgery, talked to the front desk, and left. I waited 5 anxious minutes, and then was brought into a room with two doctors, one of whom turned out to be Dr. Sonny Palmer.

Sonny said Georg thought I needed some help, and I’d be checked within a day. I asked if he’d seen the letter with his name on it. He hadn’t. He went and got it.

He came back and said I was going to be operated on in an hour.

He also explained I had a rather blocked artery in need of surgery. Survival rate was very high. Nerve damage from the operation was very unlikely. I did not enjoy phrases like survival and nerve damage, and I realized what might happen very shortly, and what might have happened for the last week.

I went back to the waiting room, where I tweeted what might have been my possible last tweets, left a message for my boss Alexis on the slack channel, hugged Rachel tearfully, and then went into surgery, or potential oblivion.

Obviously, I did not die. The surgery was done with me awake, and involved making a small hole in my right wrist, where Sonny (while blasting Bon Jovi) went in with a catheter, found the blocked artery, installed a 30mm stent, and gave back the blood to the quarter of my heart that was choked off. I listened to instructions on when to talk or when to hold myself still, and I got to watch my beating heart on a very large monitor as it got back its function.

I felt (and feel) legions better, of course – surgery like this rapidly improves life. Fatigue is gone, pain is gone. It was also explained to me what to call this whole event: a major heart attack. I damaged the heart muscle a little, although that bastard was already strong from years of high blood pressure and I’m very young comparatively, so the chances of recovery to the point of maybe even being healthier than before are pretty good. The hospital, St. Vincent’s, was wonderful – staff, environment, and even the food (including curry and afternoon tea) were a delight. My questions were answered, my needs met, and everyone felt like they wanted to be there.

It’s now been 4 days. I was checked out of the hospital yesterday. My stay in Melbourne was extended two weeks, and my hosts (MuseumNext and ACMI) paid for basically all of the additional AirBNB that I’m staying at. I am not cleared to fly until the two weeks is up, and I am now taking six medications. They make my blood thin, lower my blood pressure, cure my kidney stones/gout, and stabilize my heart. I am primarily resting.

I had lost a lot of weight and I was exercising, but my cholesterol was a lot worse than anyone really figured out. The drugs and lifestyle changes will probably help knock that back, and I’m likely to adhere to them, unlike a lot of people, because I’d already been on a whole “life reboot” kick. The path that follows is, in other words, both pretty clear and going to be taken.

Had I died this week, at the age of 46, I would have left behind a very bright, very distinct and rather varied life story. I’ve been a bunch of things, some positive and negative, and projects I’d started would have lived quite neatly beyond my own timeline. I’d have also left some unfinished business here and there, not to mention a lot of sad folks and some extremely quality-variant eulogies. Thanks to a quirk of the Internet Archive, there’s a little statue of me – maybe it would have gotten some floppy disks piled at its feet.

Regardless, I personally would have been fine on the accomplishment/legacy scale, if not on the first-person/relationships/plans scale. That my Wikipedia entry is going to have a different date on it than February 2017 is both a welcome thing and a moment to reflect.

I now face the Other Half, whatever events and accomplishments and conversations I get to engage in from this moment forward, and that could be anything from a day to 100 years.

Whatever and whenever that will be, the tweet I furiously typed out on cellphone as a desperate last-moment possible-goodbye after nearly a half-century of existence will likely still apply:

“I have had a very fun time. It was enormously enjoyable, I loved it all, and was glad I got to see it.”

 


Now That’s What I Call Script-Assisted-Classified Pattern Recognized Music —

Merry Christmas; here is over 500 days (12,000 hours) of music on the Internet Archive.

Go choose something to listen to while reading the rest of this. I suggest either something chill or perhaps this truly unique and distinct ambient recording.

 

Let’s be clear. I didn’t upload this music, I certainly didn’t create it, and actually I personally didn’t classify it. Still, 500 Days of music is not to be ignored. I wanted to talk a little bit about how it all ended up being put together in the last 7 days.

One of the nice things about working for a company that stores web history is that I can use it to do archaeology against the company itself. Doing so, I find that the Internet Archive started soliciting “the people” to begin uploading items en masse around 2003. This is before YouTube, and before a lot of other services out there.

I spent some time tracking dates of uploads, and you can see various groups of people gathering interest in the Archive as a file destination in the early 2000s, but it was a relatively limited set all around.
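The pass itself is nothing exotic – something along these lines, give or take the exact collection and fields (a simplified sketch using the Python internetarchive library, not the actual script):

# Minimal sketch: tally general audio uploads by the year they were added.
# The collection identifier and field names here are illustrative stand-ins.
from collections import Counter

from internetarchive import search_items

years = Counter()
for result in search_items('collection:opensource_audio',
                           fields=['identifier', 'addeddate']):
    added = result.get('addeddate') or ''
    if added:
        years[added[:4]] += 1  # addeddate looks like 2004-03-11T...

for year in sorted(years):
    print(year, years[year])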

Part of this is that uploading to the Archive was a bit of a non-intuitive effort; as people figured it out, they started using it, but a lot of other people didn’t. Meanwhile, YouTube and the also-rans came into being and picked up a lot of the “I just want to put stuff up” crowd.

By 2008, things start to take off for Internet Archive uploads. By 2010, things take off so much that 2008 looks like nothing. And now it’s dozens or hundreds of multimedia uploads a day through all the Archive’s open collections, not counting others who work with specific collections they’ve been given administration of.

In the case of the general uploads collection of audio, which I’m focusing on in this entry, the number of items is now at over two million.

This is not a sorted, curated, or really majorly analyzed collection, of course. It’s whatever the Internet thought should be somewhere. And what ideas they have!

Quality is variant. Finding things is variant too, although the addition of new search facets and previews has made it better over the years.

I decided to do a little experiment: slight machine-assisted “find some stuff” sorting. Let it loose on 2 million items in the hopper, see what happens. The script was called Cratedigger.

Previously, I did an experiment with keywording on texts at the Archive – the result was “bored intern” level, which was definitely better than nothing, and in some cases, that bored intern could slam through a 400-page book and determine a useful word cloud in less than a couple of seconds. Many collections of items I uploaded have these word clouds now.

It’s a little different with music. I went about it this way with a single question:

  • Hey, uploader – could you be bothered to upload a reference image of some sort as well as your music files? Welcome to Cratediggers.

Cratediggers is not an end-level collection – it’s a holding bay to do additional work, but it does show the vast majority of people would upload a sound file and almost nothing else. (I’ve not analyzed quality of description metadata in the no-image items – that’ll happen next.) The resulting ratio of items-in-uploads to items-for-cratediggers is pretty striking – less than 150,000 items out of the two million passed this rough sort.
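The heart of that rough sort is small enough to sketch here – a simplified version of the idea, not the actual Cratedigger script, using the Python internetarchive library with stand-in collection and format names:

# Sketch of the Cratediggers-style rough sort: does an audio item include
# any image file alongside its sound files? Collection name and format
# list are illustrative, not necessarily what the real script used.
from internetarchive import get_item, search_items

IMAGE_FORMATS = {'JPEG', 'PNG', 'GIF', 'TIFF'}

def has_reference_image(identifier):
    """True if the item carries at least one image file."""
    item = get_item(identifier)
    return any(f.get('format') in IMAGE_FORMATS for f in item.files)

for result in search_items('collection:opensource_audio AND mediatype:audio'):
    if has_reference_image(result['identifier']):
        print(result['identifier'])  # candidate for the holding collection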

The Bored Audio Intern worked pretty OK. By simply sending a few parameters, the Cratediggers collection ended up building on itself by the thousands without me personally investing time. I could then focus on more specific secondary scripts that do things in an even lazier manner, ensuring laziness all the way down.

The next script allowed me to point to an item in the Cratediggers collection and say “put everything by this uploader that is in Cratediggers into this other collection”, with “this other collection” being spoken word, sermons, or music. In general, a person who uploaded music that got into Cratediggers generally uploaded other music. (Same with sermons and spoken word.) As I ran these helper scripts, they did amazingly well; I didn’t have to do much beyond that.
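That helper amounts to something like the following – again a simplified sketch rather than the actual script, assuming the uploader field is what ties the items together, and with made-up collection identifiers:

# Sketch of the "everything by this uploader" helper: find the uploader of
# one Cratediggers item and move their other Cratediggers items into a
# target collection. Collection identifiers are made up; moving items
# between collections also requires privileges on the target collection.
from internetarchive import get_item, search_items

def promote_uploader(example_identifier, target_collection):
    uploader = get_item(example_identifier).metadata.get('uploader')
    if not uploader:
        return
    query = 'collection:cratediggers AND uploader:"{0}"'.format(uploader)
    for result in search_items(query):
        item = get_item(result['identifier'])
        item.modify_metadata({'collection': target_collection})

# e.g. promote_uploader('some-item-identifier', 'folksoundomy_music')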

As of this writing, the music collection contains over 400 solid days of music. It is absolutely genre-busting, ranging from industrial and noise all the way through beautiful jazz and a cappella. There are one-of-a-kind rock and acoustic albums, and simple field recordings of live events.

And, ah yes, the naming of this collection… Some time ago I took the miscellaneous texts and writings and put them into a collection called Folkscanomy.

After trying to come up with the same sort of name for sound, I discovered a very funny thing: you can’t really attach any two words involving sound together without some company or manufacturer already using it as a name. Trust me.

And that’s how we ended up with Folksoundomy.

What a word!

The main reason for this is I wanted something unique to call this collection of uploads that didn’t imply they were anything other than contributed materials to the Archive. It’s a made-up word, a zesty little portmanteau that is nowhere else on the Internet (yet). And it leaves you open for whatever is in them.

So, about the 500 days of music:

Absolutely, one could point to YouTube and the mass of material being uploaded there as being superior to any collection sitting on the Archive. But the problem is that they have their own robot army, which is a tad more evil than my robotic bored interns; you have content scanners with both false positives and strange decorations, you have ads being put on the front of things randomly, and you have a whole family of other small stabs and jabs towards an enjoyable experience getting in your way every single time. The Internet Archive does not log you, require a login, or demand other handfuls of your soul. So, for cases where people are uploading their own works and simply want them to be shared, I think the choice is superior.

This is all, like I said, an experiment – I’m sure the sorting has put some things in the wrong place, or we’re missing out on some real jewels that didn’t think to make a “cover” or icon to the files. But as a first swipe, I moved 80,000 items around in 3 days, and that’s more than any single person can normally do.

There’s a lot more work to do, but that music collection is absolutely filled with some beautiful things, as is the whole general Folksoundomy collection. Again, none of this is me, or some talent I have – this is the work of tens of thousands of people, contributing to the Archive to make it what it is, and while I think the Wayback Machine has the lion’s share of the Archive’s world image (and deserves it), there’s years of content and creation waiting to be discovered for anyone, or any robot, that takes a look.


Back That Thing Up —


I’m going to mention two backup projects. Both have been under way for some time, but the world randomly decided the end of November 2016 was the big day, so here I am.

The first is that the Internet Archive is adding another complete mirror of the Wayback Machine to one of our satellite offices in Canada. Due to the laws of Canada, to be able to do “stuff” in the country, you need to set up a separate company from your US concern. If you look up a lot of major chains and places, you’ll find they all have Canadian corporations. Well, so does the Internet Archive, and that separate company is in the process of getting a full backup of the Wayback Machine and other related data. It’s 15 petabytes of material, or more. It will cost millions of dollars to set up, and that money is already going out the door.

So, if you want, you can go to the donation page and throw some money in that direction and it will make the effort go better. That won’t take very long at all and you can feel perfectly good about yourself. You need read no further, unless you have an awful lot of disk space, at which point I suggest further reading.


Whenever anything comes up about the Internet Archive’s storage solutions, there’s usually a fluttery cloud of second-guessing and “big sky” suggestions about how everything is being done wrong and why not just engage a HBF0_X2000-PL and fark a whoziz and then it’d be solved. That’s very nice, but there’s about two dozen factors in running an Internet Archive that explain why RAID-1 and Petabyte Towers combined with self-hosting and non-cloud storage has worked for the organization. There are definitely pros and cons to the whole thing, but the uptime has been very good for the costs, and the no-ads-no-subscription-no-login model has been working very well for years. I get it – you want to help. You want to drop the scales from our eyes and you want to let us know about the One Simple Trick that will save us all.

That said, when this sort of insight comes out, it’s usually back-of-napkin and done by someone who will be volunteering several dozen solutions online that day, and that’s a lot different than coming in for a long chat to discuss all the needs. I think someone volunteering a full coherent consult on solutions would be nice, but right now things are working pretty well.

There are backups of the Internet Archive in other countries already; we’re not that bone stupid. But this would be a full, constantly maintained backup in Canada, one that would be interfaced with other worldwide stores. It’s a preparation for an eventuality that hopefully won’t come to pass.

There’s a climate of concern and fear that is pervading the landscape this year, and the evolved rat-creatures that read these words in a thousand years will be able to piece together what that was. But regardless of your take on the level of concern, I hope everyone agrees that preparation for all eventualities is a smart strategy as long as it doesn’t dilute your primary functions. Donations and contributions of a monetary sort will make sure there’s no dilution.

So there’s that.

Now let’s talk about the backup of this backup that a great set of people have been working on.


About a year ago, I helped launch INTERNETARCHIVE.BAK. The goal was to create a fully independent distributed copy of the Internet Archive that was not reliant on a single piece of Internet Archive hardware and which would be stored on the drives of volunteers, with 3 geographically distributed copies of the data worldwide.

Here’s the current status page of the project. We’re backing up 82 terabytes of information as of this writing. It was 50 terabytes last week. My hope is that it will be 1,000 terabytes sooner rather than later. Remember, this is 3 copies, so each terabyte backed up needs three terabytes of volunteer space.

For some people, a terabyte is this gigantically untenable number and certainly not an amount of disk space they just have lying around. Other folks have, at their disposal, dozens of terabytes. So there’s lots of hard drive space out there, just not evenly distributed.

The IA.BAK project is a complicated one, but the general situation is that it uses the program git-annex to maintain widely-ranged backups from volunteers, with a “check-in” of data integrity on a monthly basis. It has a lot of technical meat to mess around with, and we’ve had some absolutely stunning work done by a team of volunteer developers and maintainers as we make this plan work on the ground.
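For the curious, a volunteer’s side of that monthly check-in boils down to something like this – a rough sketch, not the actual IA.BAK client scripts, which wrap all of this up for you (the shard path is made up):

# Rough sketch of a shard "check-in": sync the git-annex repo, verify the
# locally stored data, and sync again so the project can see the results.
import subprocess

SHARD = '/mnt/bigdisk/ia.bak/shard1'  # illustrative path to a shard repo

for cmd in (['git', 'annex', 'sync'],
            ['git', 'annex', 'fsck'],
            ['git', 'annex', 'sync']):
    subprocess.run(cmd, cwd=SHARD, check=True)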

And now, some thoughts on the Darkest Timeline.


I’m both an incredibly pessimistic and optimistic person. Some people might use the term “pragmatic” or something less charitable.

Regardless, I long ago gave up assumptions that everything was going to work out OK. It has not worked out OK in a lot of things, and there’s a lot of broken and lost things in the world. There’s the pessimism. The optimism is that I’ve not quite given up hope that something can be done about it.

I’ve now dedicated 10% of my life to the Internet Archive, and I’ve dedicated pretty much all of my life to the sorts of ideals that would make me work for the Archive. Among those ideals are free expression, gathering of history, saving of the past, and making it all available to as wide an audience, without limit, as possible. These aren’t just words to me.

Regardless of if one perceives the coming future as one rife with specific threats, I’ve discovered that life is consistently filled with threats, and only vigilance and dedication can break past the fog of possibilities. To that end, the Canadian Backup of the Internet Archive and the IA.BAK projects are clear bright lines of effort to protect against all futures dark and bright. The heritage, information and knowledge within the Internet Archive’s walls are worth protecting at all cost. That’s what drives me and why these two efforts are more than just experiments or configurations of hardware and location.

So, hard drives or cash, your choice. Or both!


In Which I Tell You It’s A Good Idea To Support a Magazine-Scanning Patreon —

So, Mark Trade and I have never talked, once.

All I know about Mark is that due to his efforts, over 200 scans of magazines are up on the Archive.


These are very good scans, too. The kind of scans that a person looking to find a long-lost article, verify a hard-to-grab fact, or pass a great image along to others would kill to have. 600 dots per inch, excellent contrast, clarity, and the margins cut just right.

[Scanned page: CD-ROM Today #5, April/May 1994]

So, I could fill this entry with all the nice covers, but covers are kind of easy, to be frank. You put them face down on the scanner, you do a nice big image, and then touch it up a tad. The cover paper and the printing is always super-quality compared to the rest, so it’ll look good:

[Cover scan: CD-ROM Today #5, April/May 1994]

But the INSIDE stuff… that’s so much harder. Magazines were often bound in a way that put the images RIGHT against the binding and not every magazine did the proper spacing and all of it is very hard to shove into a scanner and not lose some information. I have a lot of well-meaning scans in my life with a lot of information missing.

But these…. these are primo.

[Page scans: PC Games #1, Fall 1988]

When I stumbled on the Patreon, he had three patrons giving him $10 a month. I’d like it to be $500, or $1000. I want this to be his full-time job.

Reading the patreon page’s description of his process shows he’s taking it quite seriously. Steaming glue, removing staples. I’ve gone on record about the pros and cons of destructive scanning, but game magazines are not rare, just entirely unrepresented in scanned items compared to how many people have these things in their past.

I read something like this:

It is extremely unlikely that I will profit from your pledge any time soon. My scanner alone was over $4,000 and the scanning software was $600. Because I’m working with a high volume of high resolution 600 DPI images I purchased several hard drives including a CalDigit T4 20TB RAID array for $2,000. I have also spent several thousand dollars on the magazines themselves, which become more expensive as they become rarer. This is in addition to the cost of my computer, monitor, and other things which go into the creation of these scans. It may sound like I’m rich but really I’m just motivated, working two jobs and pursuing large projects.

…and all I think about is, this guy is doing so much amazing work that so many thousands could be benefiting from, and they should throw a few bucks at him for his time.

My work consists of carefully removing individual pages from magazines with a heat gun or staple-remover so that the entire page may be scanned. Occasionally I will use a stack paper cutter where appropriate and will not involve loss of page content. I will then scan the pages in my large format ADF scanner into 600 DPI uncompressed TIFFs. From there I either upload 300 DPI JPEGs for others to edit and release on various sites or I will edit them myself and store the 600 DPI versions in backup hard disks. I also take photos of magazines still factory-sealed to document their newsstand appearance. I also rip full ISOs of magazine coverdiscs and make scans of coverdisc sleeves on a color-corrected flatbed scanner and upload those to archive.org as well.

This is the sort of thing I can really get behind.

The Internet Archive is scanning stuff, to be sure, but the focus is on books. Magazines are much, much harder to scan – the book scanners in use are just not as easy to use with something bound the way magazines are. The work that Mark is doing is stuff that very few others are doing, and to have canonical scans of the advertisements, writing and materials from magazines that used to populate the shelves is vital.

Some time ago, I gave my collection of donated game-related magazines to the Museum of Art and Digital Entertainment, because I recognized I couldn’t be scanning them anytime soon, and how difficult that scanning was going to be. It would take some real major labor I couldn’t personally give.

Well, here it is. He’s been at it for a year. I’d like to see that monthly number jump to $100/month, $500/month, or more. People dropping $5/month towards this Patreon would be doing a lot for this particular body of knowledge.

Please consider doing it.

Thanks.


A Simple Explanation: VLC.js —

The previous entry got the attention it needed, and the maintainers of the VLC project connected with both Emularity developers and Emscripten developers and the process has begun.

The best example of where we are is this screenshot:

[Screenshot: VLC.js]

The upshot of this is that a JavaScript-compiled version of the VLC player now runs, spits out a bunch of status and command-line information, and then gets cranky that it has no video/audio device to use.

With the Emularity project, this was something like 2-3 months into the project. In this case, it happened in 3 days.

The reasons it took such a short time were multi-fold. First, the VLC maintainers jumped right into it at full-bore. They’ve had to architect VLC for a variety of wide-ranging platforms including OSX, Windows, Android, and even weirdos like OS/2; to have something aimed at “web” is just another place to go. (They’d also made a few web plugins in the past.) Second, the developers of Emularity and Emscripten were right there to answer the tough questions, the weird little bumps and switchbacks.

Finally, everybody has been super-energetic about it – diving into the idea, without getting hung up on factors or features or what may emerge; the same flexibility that coding gives the world means that the final item will be something that can be refined and improved.

So that’s great news. But after the initial request went out to a lot of screens, a wave of demands and questions came along, and I thought I’d answer some of them to the best of my abilities, and also make some observations as well.


When you suggest something somewhat crazy, especially in the programming or development world, there’s a variant amount of response. And if you end up on Hackernews, Reddit, or a number of other high-traffic locations, those reactions fall into some very predictable areas:

  • This would be great if it happens
  • This is fundamentally terrible, let me talk about why for 4 paragraphs
  • You are talking about making a sword. I am a swordmaker. I have many opinions.
  • My sister was attacked by a C library and I’m going to tell a very long story
  • Oh man, Jason Scott, this guy

So, quickly on some of these:

  • It’s understandable some people will want to throw the whole idea under the bus because the idea of the Web Browser playing a major part in transactions is a theoretical hellscape compared to an ideal infrastructure, but that’s what we have and here we go.
  • I know that it sounds like porting things to Javascript is crazy. I find that people think we’re rewriting things from scratch, instead of using Emscripten, which compiles out to Javascript as a target (and later WebAssembly). We do not write from scratch.
  • Browsers do some of this heavy lifting. It depends on the browser, on the platform, on the day, and they do not talk to each other. If there was a way to include a framework to tell a browser what to do with ‘stuff’ and then it brought both the stuff and the instructions in and did the work, great. Yes, there’s plenty of cases of stuff/instructions (Webpage/HTML, Audio/MP3) that browsers take in, but it’s different everywhere.

But let’s shift over to why I think this is important, and why I chose VLC to interact with.

First, VLC is one of those things that people love, or people wish there was something better than, but VLC is what we have. It’s flexible, it’s been well-maintained, and it has been singularly focused. For a very long time, the goal of the project has been aimed at turning both static files AND streams into something you can see on your machine. And the machine you can see it on is pretty much every machine capable of making audio and video work.

Fundamentally, VLC is a bucket: drop a very large variance of sound-oriented or visual-oriented files and containers into it, and it will do something with them. DVD ISO files become playable DVDs, including all the features of said DVDs. VCDs become craptastic but playable DVDs. MP3, FLAC, MIDI – all of them fall into VLC and become scrubbing-ready sound experiences. There are quibbles here and there about accuracy of reproduction (especially with older MOD-like formats like S3M or .XM), but these are code, and fixable in code. That VLC doesn’t immediately barf on the rug with the amount of crapola that can be thrown at it is enormous.

And completing this thought: by choosing something like VLC, with its top-down open source condition and universal approach, the “closing of the loop” of VLC being available in all browsers instantly will ideally cause people to find the time to improve and add formats that otherwise wouldn’t experience such advocacy. Apple II floppy disk images? Oscilloscope captures? Morse code evaluation? Slow-scan television? If those items have a future, it’s probably in VLC, and it’s much more likely if the web uses a VLC that just appears in the browser, no fuss or muss.


Fundamentally, I think my personal motivations are pretty transparent and clear. I help oversee a petabytes-big pile of data at the Internet Archive. A lot of it is very accessible; even more of it is not, or has to have clever “derivations” pulled out of it for access. You can listen to .FLACs that have been uploaded, for example, because we derive (noted) mp3 versions that go through the web easier. Same for the MPG files that become .mp4s and so on, and so on. A VLC that (optionally) can play off the originals, or which can access formats that currently sit as huge lumps in our archives, will be a fundamental world changer.

Imagine playing DVDs right there, in the browser. Or really old computer formats. Or doing a bunch of simple operations to incoming video and audio to improve it without having to make a pile of slight variations of the originals to stream. VLC.js will do this and do it very well. The millions of files that are currently without any status in the archive will join the millions that do have easy playability. Old or obscure ideas will rejoin the conversation. Forgotten aspects will return. And VLC itself, faced with such a large test sample, will get better at replaying these items in the process.

This is why this is being done. This is why I believe in it so strongly.


I don’t know what roadblocks or technical decisions the team has ahead of it, but they’re working very hard at it, and some sort of prototype seems imminent. The world with this happening will change slightly when it starts working. But as it refines, and as these secondary aspects begin, it will change even more. VLC will change. Maybe even browsers will change.

Access drives preservation. And that’s what’s driving this.

See you on the noisy and image-filled other side.


A Simple Request: VLC.js —

Almost five years ago today, I made a simple proposal to the world: Port MAME/MESS to Javascript.

That happened.

I mean, it cost a dozen people hundreds of hours of their lives…. and there were tears, rage, crisis, drama, and broken hearts and feelings… but it did happen, and the elation and the world we live in now is quite amazing, with instantaneous emulated programs in the browser. And it’s gotten boring for people who know about it, except when they haven’t heard about it until now.

By the way: work continues earnestly on what was called JSMESS and is now called The Emularity. We’re doing experiments with putting it in WebAssembly and refining a bunch of UI concerns and generally making it better, faster, cooler with each iteration. Get involved – come to #jsmess on EFNet or contact me with questions.

In celebration of the five years, I’d like to suggest a new project, one of several candidates I’ve weighed, but the one I think has the best ratio of effort to absolute game-changing potential.

[Image: the VLC media player logo]

Hey, come back!

It is my belief that a Javascript (later WebAssembly) port of VLC, the VideoLan Player, will fundamentally change our relationship to a mass of materials and files out there, ones which are played, viewed, or accessed. Just like we had a lot of software locked away in static formats that required extensive steps to even view or understand, so too do we have formats beyond the “usual” that are also frozen into a multi-step process. Making these instantaneously function in the browser, all browsers, would be a revolution.

A quick glance at the features list of VLC shows how many variant formats it handles, from audio and sound files through to encapsulations like DVDs and VCDs. Files that now rest as hunks of ISOs and .ZIP files could be turned into living, participatory parts of the online conversation. Also, formats like .MOD and .XM (trust me) would live again effectively.

Also, VLC has weathered years and years of existence, and the additional use case would help people contribute to it, much like there have been improvements in MAME/MESS over time as folks who normally didn’t dip in there added suggestions or feedback to make the project better in pretty obscure realms.

I firmly believe that this project, fundamentally, would change the relationship of audio/video to the web. 

I’ll write more about this in coming months, I’m sure, but if you’re interested, stop by #vlcjs on EFnet, or ping me on twitter at @textfiles, or write to me at vlcjs@textfiles.com with your thoughts and feedback.

See you.

 


The Festival Floppies —

In 2009, Josh Miller was walking through the Timonium Hamboree and Computer Festival in Baltimore, Maryland. Among the booths of equipment, sales, and demonstrations, he found a vendor was selling an old collection of 3.5″ floppy disks for DOS and Windows. He bought it, and kept it.

A few years later, he asked me if I wanted them, and I said sure, and he mailed them to me. They fell into the everlasting Project Pile, and waited for my focus and attention.

They looked like this:

[Photo: the box of 3.5″ floppy disks]

I was particularly interested in the floppies that appeared to be someone’s compilation of DOS and Windows programs in the most straightforward form possible – custom laser-printed directories on the labels, and no obvious theme as to why this shareware existed on them. They looked like this, separated out:

[Photo: the shareware floppies with custom laser-printed labels, separated out]

There were other floppies in the collection, as well:

[Photo: other floppies from the collection]

They’d sat around for a few years while I worked on other things, but the time finally came this week to spend some effort to extract data.

There are debates on how to do this that are both boring and infuriating, and I’ve ended friendships over them, so let me just say that I used a USB 3.5″ floppy drive (still available for cheap on Amazon; please take advantage of that) and a program called WinImage that will pull out a disk image in the form of a .ima file from the floppy drive. Yes, I could do flux imaging of these disks, but sorry, that’s incredibly insane overkill. These disks contain files put on there by a person and we want those files, along with the accurate creation dates and the filenames and contents. WinImage does it.

Sometimes, the floppies have errors and require several tries to get the data off them. Sometimes it takes a LOT of tries. If after a mass of tries I am unable to do a full disk scan into a disk image, I try just mounting it as A: in Windows and pulling the files off – sometimes the files are just fine but other parts of the disk are dead. I make this a .ZIP file instead of a .IMA file. This is not preferred, but the data gets off in some form.
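That fallback is nothing more than a loop like this – a sketch, assuming the half-dead floppy is mounted as A: on a Windows machine:

# Sketch of the fallback: walk a mounted (half-dead) floppy and zip up
# whatever files will still copy off, skipping the ones that error out.
import os
import zipfile

def salvage_floppy(drive='A:\\', out='salvaged.zip'):
    with zipfile.ZipFile(out, 'w', zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(drive):
            for name in files:
                path = os.path.join(root, name)
                try:
                    zf.write(path, os.path.relpath(path, drive))
                except OSError:
                    print('skipped (read error):', path)

# salvage_floppy()  # run with the floppy in the drive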

Some of them (just a handful) were not even up for this – they’re sitting in a small plastic bin and I’ll try some other methods in the future. The ratio of imaged to ZIPped to dead was very good, something like 40-3-3.

I dumped most of the imaged files (along with the ZIPs) into this item.

This is a useful item if you, yourself, want to download about 100 disk image files and “do stuff” with them. My estimation is that all of you can be transported from the first floor to the penthouse of a skyscraper with 4 elevator trips. Maybe 3. But there you go, folks. They’re dropped there and waiting for you. Internet Archive even has a link that means “give me everything at once“. It’s actually not that big at all, of course – about 260 megabytes, less than half of a standard CD-ROM.

I could do this all day. It’s really easy. It’s also something most people could do, and I would hope that people sitting on top of 3.5” floppies from DOS or Windows machines would be up for paying the money for that cheap USB drive and something like WinImage and keep making disk images of these, labeling them as best they can.

I think we can do better, though.

The Archive is running the Emularity, which includes a way to run EM-DOSBOX, which can not only play DOS programs but even play Windows 3.11 programs as well.

Therefore, it’s potentially possible for many of these programs, especially ones particularly suited as stand-alone “applications”, to be turned into in-your-browser experiences to try them out. As long as you’re willing to go through them and get them prepped for emulation.

Which I did.


The Festival Floppies collection is over 500 programs pulled from these floppies that were imaged earlier this week. The only thing they have in common is that they were sitting in a box on a vendor table in Baltimore in 2009, and I thought at a glance they might run and possibly be worth trying out. After I thought this (using a script to present them for consideration), the script did all the work of extracting the files off the original floppy images, putting the programs into an Internet Archive item, and then running a “screen shotgun” I devised with a lot of help a few years back that plays the emulations, takes the “good shots” and makes them part of a slideshow so you can get a rough idea of what you’re looking at.
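The upload step of that pipeline is conceptually the simplest part – something like this sketch, with stand-in collection and metadata values (the real script also wires up the emulation metadata and the screen shotgun):

# Sketch of the "one item per program" upload step, using the Python
# internetarchive library. The collection name and example values are
# placeholders, not what the actual pipeline uses.
from internetarchive import upload

def make_program_item(identifier, title, files):
    metadata = {
        'collection': 'test_collection',  # stand-in target collection
        'mediatype': 'software',
        'title': title,
    }
    upload(identifier, files=files, metadata=metadata)

# e.g. make_program_item('example-floppy-program',
#                        'Destroyer for Windows 2.0', ['d4w20.zip'])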


You either like the DOS/Windows aesthetic, or you do not. I can’t really argue with you over whatever direction you go – it’s both ugly and brilliant, simple and complex, dated and futuristic. A lot of it depended on the authors and where their sensibilities lay. I will say that once things started moving to Windows, a bunch of things took on a somewhat bland sameness due to the “system calls” for setting up a window, making it clickable, and so on. Sometimes a brave and hearty soul would jazz things up, but they got rarer indeed. On the other hand, we didn’t have 1,000 hobbyist and professional programs re-inventing the wheel, spokes, horse, buggy, stick shift and gumball machine each time, either.


Just browsing over the images, you can probably see cases where someone put real work into the whole endeavor – if they seem to be nicely arranged words, or have a particular flair with the graphics, you might be able to figure out which ones have the actual programming to match and might be useful as well. Maybe not a direct indicator, but certainly a flag. It depends on how much you want to crate-dig through these things.

Let’s keep going.

Using a “word cloud” script that showed up as part of an open source package, I rewrote it into something I call a “DOS Cloud”. It goes through these archives of shareware, finds all the textfiles in the .ZIP that came along for the ride (think README.TXT, READ.ME, FILE_ID.DIZ and so on) and then runs through to see what the most frequent one- and two-word phrases are. This ends up being super informative, or not informative at all, but it’s automatic, and I like automatic. Some examples:

Mad Painter: paint, mad, painter, truck, joystick, drive, collision, press, cloud, recieve, mad painter, dos prompt

Screamer: screamer, code, key, screen, program, press, command, memory, installed, activate, code key, memory resident, correct code, key combination, desired code

D4W20: timberline, version, game, sinking, destroyer, gaming, smarter, software, popularity, timberline software, windows version, smarter computer, online help, high score
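The heart of the DOS Cloud pass is small enough to sketch here – this is the idea rather than the exact script, with a simplified filename pattern and stopword list:

# Stripped-down "DOS Cloud": pull the text files riding along in a
# shareware .ZIP and count the most common one- and two-word phrases.
import re
import zipfile
from collections import Counter

TEXT_NAME = re.compile(r'\.(txt|me|diz|doc|1st)$', re.IGNORECASE)
STOPWORDS = {'the', 'and', 'for', 'you', 'your', 'this', 'that', 'with',
             'are', 'will', 'not', 'have', 'from', 'can'}

def dos_cloud(zip_path, top=12):
    words = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if TEXT_NAME.search(name):
                text = zf.read(name).decode('cp437', errors='replace').lower()
                words += [w for w in re.findall(r'[a-z]{3,}', text)
                          if w not in STOPWORDS]
    counts = Counter(words)                                 # single words
    counts.update(' '.join(p) for p in zip(words, words[1:]))  # word pairs
    return [phrase for phrase, _ in counts.most_common(top)]

# e.g. print(', '.join(dos_cloud('d4w20.zip')))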

Certainly in the last case, those words are much more informative than the name D4W20 (which actually stands for “Destroyer for Windows Version 2.0”), and so the machine won the day. I’ve called this “bored intern” level before and I’d say it’s still true – the intern may be bored, but they never stop doing the process, either. I’m sure there’s some nascent class discussion here, but I’ll say that I don’t entirely think this is work for human beings anymore. It’s just more and more algorithms at this point. Reviews and contextual summaries not discernible from analysis of graphics and text are human work.

For now.


These programs! There are a lot of them, and a good percentage solve problems we don’t have anymore or use entirely different methods to deal with the information. Single-use programs to deal with Y2K issues, view process tasks better, configure your modem, add a DOS interface, or track a pregnancy. Utilities to put the stardate in the task bar, applications around coloring letters, and so it goes. I think the screenshots help make decisions, if you’re one of the people idly browsing these sets and have no personal connection to DOS or Windows 3.1 as a lived experience.

I and others will no doubt write more and more complicated methods for extracting or providing metadata for these items, and work I’m doing in other realms goes along with this nicely. At some point, the entries for each program will have a complication and depth that rivals most anything written about the subjects at the time, when they were the state of the art in computing experience. I know that time is coming, and it will be near-automatic (or heavily machine-assisted) and it will allow these legions of nearly-lost programs to live again as easily as a few mouse clicks.

But then what?


But Then What is rapidly becoming the greatest percentage of my consideration and thought, far beyond the relatively tiny hurdles we now face in terms of emulation and presentation. It’s just math now with a lot of what’s left (making things look/work better on phones, speeding up the browser interactions, adding support for disk swapping or printer output or other aspects of what made a computer experience lasting to its original users). Math, while difficult, has a way of outing its problems over time. Energy yields results. Processing yields processing.

No, I want to know what’s going to happen beyond this situation, when the phones and browsers can play old everything pretty accurately, enough that you’d “get it” to any reasonable degree playing around with it.

Where do we go from there? What’s going to happen now? This is where I’m kind of floating these days, and there are ridiculously scant answers. It becomes very “journey of the mind” as you shake the trees and only nuts come out.

To be sure, there’s a sliver of interest in what could be called “old games” or “retrogaming” or “remixes/reissues” and so on. It’s pretty much only games, it’s pretty much roughly 100 titles, and it’s stuff that has seeped enough into pop culture or whose parent companies still make enough bank that a profit motive serves to ensure the “IP” will continue to thrive, in some way.

The Gold Fucking Standard is Nintendo, who have moved so far into the radical space of “protecting their IP” that they’ve started wrecking some of the past – people who make “fan remixes” might be up for debate as to whether they should do something with old Nintendo stuff, but laying out threats against people recording how they experienced the games, and against any recording of the games for any purpose, and sending legal threats at anyone and everyone even slightly referencing their old stuff, as a core function… well, I’m just saying perhaps ol’ Nintendo isn’t doing itself any favors, but on the other hand they can apparently be the most history-distorting dicks in this space quadrant and the new games still have people buying them in boatloads. So let’s just set aside the Gold Fucking Standard for a bit when discussing this situation. Nobody even comes close.

There’s other companies sort of taking this hard-line approach: “Atari”, Sega, Capcom, Blizzard… but again, these are game companies markedly defending specific games that in many cases they end up making money on. In some situations, it’s only one or two games they care about and I’m not entirely convinced they even remember they made some of the others. They certainly don’t issue gameplay video takedowns and on the whole, historic overview of the companies thrives in the world.

But what a small keyhole of software history these games are! There’s entire other experiences related to software that are both available, and perhaps even of interest to someone who never saw this stuff the first time around. But that’s kind of an educated guess on my part. I could be entirely wrong on this. I’d like to find out!

Pursuing this line of thought has sent me hurtling into what are even museums and what are even public spaces and all sorts of more general questions that I have extracted various answers for, and which it turns out are kind of turmoil-y. It has also informed me that nobody completely knows, but holy shit do people without managerial authority have ideas about it. Moving this offline-debated environment over to the online experience solves some problems (10,000 people can look at something with the same closeness and all the time in the world to regard it) and adds others (roving packs of shitty consultant companies doing rough searches on a pocket list of “protected materials” and then sending out form letters toward anything that even roughly matches, and calling it a ($800) day).

Luckily, I happen to work for an institution that is big on experiments and giving me a laughably long leash, and so the experiment of instant online emulated computer experience lives in a real way and can allow millions of people (it’s been millions, believe it or not) to instantly experience those digital historical items every second of every day.

So even though I don’t have the answers, at all, I am happy that the unanswered portions of the Big Questions haven’t stopped people from deriving a little joy, a little wonder, a little connection to this realm of human creation.

That’s not bad.



Why the Apple II ProDOS 2.4 Release is the OS News of the Year —

[Screenshot: ProDOS 2.4 splash screen]

In September of 2016, a talented programmer released his own cooked update to a major company’s legacy operating system, purely because it needed to be done. A raft of new features, wrap-in programs, and bugfixes were included in this release, which I stress was done as a hobby project.

The project is understatement itself, simply called ProDOS 2.4. It updates ProDOS, the last version of which, 2.0.3, was released in 1993.

You can download it, or boot it in an emulator on the webpage, here.

As an update unto itself, this item is a wonder – compatibility has been repaired for the entire Apple II line, from the first Apple II through to the Apple IIgs, covering the various versions of the 6502 CPU (like the 65C02) as well as cases where newer cards have been installed in Apple IIs for USB-connected/emulated drives. Important utilities related to disk transfer, disk inspection, and program selection have joined the disk image. The footprint is smaller, and it runs faster than its predecessor (a wonder in any case of OS upgrades).

The entire list of improvements, additions and fixes is on the Internet Archive page I put up.

[Screenshot: ProDOS 2.4 Bitsy Boot]

The reason I call this the most important operating system update of the year is multi-fold.

First, the pure unique experience of a 23-year gap between upgrades means that you can see a rare example of what happens when a computer environment just sits tight for decades, with many eyes on it and many notes about how the experience can be improved, followed by someone driven enough to go through methodically and implement all those requests. The inclusion of the utilities on the disk means we also get the benefit of all the after-market improvements in functionality that the continuing users of the environment needed, all time-tested, and all wrapped in without disturbing the size of the operating system itself. It’s like a gold-star hall of fame of Apple II utilities packed into the OS they were inspired by.

This choreographed waltz of new and old is unique in itself.

Next is that this is an operating system upgrade free of commercial and marketing constraints and drives. Compare it with, say, an iOS upgrade that trumpets the addition of a search function, or blares out a proud announcement that they broke maps because Google kissed another boy at recess. Or Windows 10, the 1968 Democratic Convention Riot of Operating Systems, which was designed from the ground up to be compatible with a variety of mobile/tablet products that are on the way out, and which was shoved down the throats of current users with a cajoling, insulting methodology of misleading opt-out routes and freakier and freakier fake countdowns.

The current mainstream OS environment is, frankly, horrifying, and to see a pure note, a trumpet of clear-minded attention to efficiency, functionality and improvement, is a testament to the fact that it is still possible to achieve this, albeit for a smaller, slower-moving target. Either way, it’s an inspiration.

[Screenshot: ProDOS 2.4 Bitsy Bye]

Last of all, this upgrade is a valentine not just to the community who makes use of this platform, but to the ideals of hacker improvement reaching back decades before 1993. The number of people this upgrade benefits is relatively small in the world – the number of folks still using Apple IIs is tiny enough that nearly everybody doing so either knows each other, or knows someone who knows everyone else. It is not a route to fame, or a resume point to get snapped up by a start-up, or a game of one-upsmanship shoddily slapped together to prove a point or to drop a “beta” onto the end as a fig leaf against what could best be called a lab experiment gone off in the fridge. It is done for the sake of what it is – a tool that has been polished and made anew, so the near-extinct audience for it can work to the best of their ability with a machine that, itself, is thought of as the last mass-marketed computer designed by a single individual.

That’s a very special day indeed, and I doubt the remainder of 2016 will top it, any more than I think the first 9 months have.

Thanks to John Brooks for the inspiration this release provides. 


Who’s Going to be the Hip Hop Hero —

People often ask me if there’s a way they can help. I think I have something.

So, the Internet Archive has had a wild hit on its hand with the Hip Hop Mixtapes collection, which I’ve been culling from multiple sources and then shoving into the Archive’s drives through a series of increasingly complicated scripts. When I run my set of scripts, they do a good job of yanking the latest and greatest from a selection of sources, doing all the cleanup work, verifying the new mixtapes aren’t already in the collection, and then uploading them. From there, the Archive’s processes do the work, and then we have ourselves the latest tapes available to the world.
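
For the curious, the verify-and-upload step boils down to something like the sketch below, written against the Internet Archive’s internetarchive Python library. This is not my actual script – the identifier scheme, the collection name, and the metadata fields are stand-in assumptions – but it shows the shape of the thing: derive an identifier, skip the tape if the Archive already has it, otherwise push the files and metadata up and let the Archive’s processes take over.

    import re

    from internetarchive import get_item, upload


    def make_identifier(title):
        """Turn a mixtape title into a stable archive.org identifier (illustrative scheme only)."""
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
        return "mixtape-" + slug


    def upload_mixtape(title, artist, files, collection="hiphopmixtapes"):
        """Skip tapes already on the Archive; otherwise upload the files with basic metadata."""
        identifier = make_identifier(title)

        # Verify the tape isn't already in the collection before pushing it up.
        if get_item(identifier).exists:
            print("skipping %s: already on the Archive" % identifier)
            return

        metadata = {
            "collection": collection,  # assumed collection name
            "mediatype": "audio",
            "title": title,
            "creator": artist,
        }
        # After upload, the Archive's own deriving/processing takes over.
        upload(identifier, files=files, metadata=metadata)
        print("uploaded %s (%d files)" % (identifier, len(files)))


    if __name__ == "__main__":
        upload_mixtape(
            title="Example Tape Vol. 1",
            artist="DJ Example",
            files=["example-tape-vol-1/01-intro.mp3"],
        )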

Since I see some of these tapes get thousands of listens within hours of being added, I know this is something people want. So, it’s a success all around.


With success, of course, come the two flipside factors: my own interest in seeing the collection improved and expanded, and the complaints from people who know this subject and find shortcomings in every little thing.

There is a grand complaint that this collection currently focuses on mixtapes from 2000 onwards (and really, 2005 onwards). Guilty. That’s what’s easiest to find. Let’s set that one aside for a moment, as I’ve got several endeavors to improve that.

What I need help with is this: there is a mass of mixtapes that quickly fell off the radar in terms of being easily downloadable, and I need someone to spend time grabbing them for the collection.

While impressive, the 8,000 tapes up on the Archive are actually just the ones my scripts were able to grab without any hangups, such as the tapes falling out of favor or the sites offering them going down. Going by the global list I have, the total number of tapes could be as high as 20,000.
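
If you want a sense of how that gap gets measured, here’s a rough sketch of comparing a global list against what’s already in the collection, using the same internetarchive library’s search_items call. The collection name and the assumption that the list is a plain text file of one identifier per line are mine, for illustration only.

    from internetarchive import search_items


    def missing_tapes(global_list_path, collection="hiphopmixtapes"):
        """Return identifiers from the global list that aren't in the collection yet."""
        # Assumed format: one candidate identifier per line in a plain text file.
        with open(global_list_path) as handle:
            wanted = {line.strip() for line in handle if line.strip()}

        # Everything already up in the collection, by identifier.
        have = {
            result["identifier"]
            for result in search_items("collection:" + collection, fields=["identifier"])
        }
        return sorted(wanted - have)


    if __name__ == "__main__":
        for identifier in missing_tapes("global-mixtape-list.txt"):
            print(identifier)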

Again, it’s a shame that a lot of pre-2000 mixtapes haven’t yet fallen into my lap, but it’s really a shame that mixtapes that existed, were uploaded to the internet, and were readily available just a couple of years ago have faded into obscurity. I’d like someone (or a coordinated group of someones) to help grab those disparate and at-risk mixtapes and get them into the collection.

I have information on all these missing tapes – the song titles, the artist information, and even information on mp3 size and what was in the original package. I’ve gone out there and tried to do this work, and I can do it, but it’s not a good use of my time – I have a lot of things I have to do and dedicating my efforts in this particular direction means a lot of other items will suffer.

So I’m reaching out to you. Hit me up at mixtapes@textfiles.com and help me build a set of people who are grabbing this body of work before it falls into darkness.

Thanks.