A lot of people have taken up sourdough bread making during the pandemic. My wife is one of those. She’s gotten pretty good at it. Behold her latest creation.
The open source outliner, Zavala, that I have been building now has a website and is available for beta testing. Check it out. zavala.vincode.io
I thought I’d let people know what I’ve been working on lately. I’ve been keeping busy!
My main project that I’ve been working on for the last couple years is NetNewsWire. This project is amazing to work on. The community that Brent Simmons has created is great. The popularity of NetNewsWire is great too. It really helps get you motivated when so many people benefit from and appreciate your work.
For the 6.0 release cycle I did the Big Sur UI refresh, iCloud syncing, and the Reddit and Twitter extensions. I also finished up the work that another developer had started on the Reader API code. That got us BazQux, The Old Reader, Inoreader, and FreshRSS syncing. I did some other small things for 6.0 too, like the Share Extension.
Lots of other developers contributed to 6.0, including in big ways like implementing NewsBlur syncing and other UI changes. NetNewsWire 6 is a big deal for the team. I hope you give it a try.
This project has been on hold for about 6 months. I’d still like to tackle the feed discovery problem. For those not familiar with Feed Spider, I was working on creating a searchable blog directory. Look through my back posts in this blog for details.
I still want to tackle this again. I paused my work on it to learn more about Machine Learning, but then I got distracted with building an outliner application. I’ll probably work on this more in the coming year as a way to mix up my development work and not get bored.
I built an outliner application called Zavala. This is a fun project. I’m a bit of an outliner novice and didn’t use one on a regular basis. Now I use one constantly and for all kinds of different things. Organizing my thoughts, planning and managing projects, and even just as a TODO manager.
I just released the first Alpha builds of Zavala. That means that from here until the release, I’m only fixing bugs. No new features are planned for the 1.0 release. This shouldn’t take up much time and I’ll probably work on some other things in the meantime.
I’m not sure what’s next. There are some things I would like to see in NetNewsWire 6.1, so there will be some work done there for sure. I’ll work on small features for Zavala to fill in the time. Maybe I’ll even get to spend some time on Feed Spider.
And we can’t forget that WWDC is coming. After Apple releases its updated frameworks, there is always updating that has to be done to existing applications. That often takes up a good chunk of an Apple developer’s summer. Maybe by Fall things will be slowing down for me. Then again, I’ll probably just find something new and shiny to start chasing.
My wife’s best friend is #1 & #2 in Amazon’s Women in Politics category! She’s beating out many political big names. https://www.amazon.com/gp/new-releases/books/5571264011/ref=zg_b_hnr_5571264011_1
Last February, Nicole and I were wintering in Arizona. We were down there living out of the van, like we normally try to do in the winter. We were fairly close to my Uncle Terry, who had a winter home in Yuma, AZ, so we went to visit him and stayed just across the border in California on some BLM land.
It was pretty much just desert there, but that didn’t matter. We were going to be spending our days with Terry and his partner Irene. They took us to see the Yuma Territorial Prison and took us out to eat at all their favorite spots. We had a blast visiting with them and catching up.
Uncle Terry died today from complications from COVID-19. He was older, but in otherwise good health until the virus. When last I saw him he was just as sharp as I’d ever seen him mentally. He was taken before his time was up.
I’m going to miss that man. We didn’t spend enough time together over the years and I regret that. Terry was a great story teller and always so much fun to be around. There was always an abundance of laughter when Terry got on telling stories. He also loved cars and collected them, so he and I never ran out of things to talk about.
My family is heartbroken. We’re heartbroken and also angry. It didn’t have to be like this.
I was reading a blog post about firetrucks and it reminded me of a #vanlife post I never got around to writing.
In late January, 2019 we were traveling around the U.S. and ended up on Padre Island. Padre Island is just off the coast of Texas. You can drive to the island over a bridge and can camp for free most anywhere you want to along the beach. Even in the winter it is warm there and quite beautiful if you like the ocean.
As beautiful as it is, there isn’t much to do except watch the ocean. So a couple times a day, I would walk up and down the shoreline and sometimes see some wildlife.
People were well spread out along the beach. On my walks I got to meet lots of interesting people with all different kinds of mobile living arrangements. By far the most interesting was the firetruck conversion.
I’d walked by this contraption several times before I caught the owner outside showing it off for someone and could discuss it with him. He was a former boat builder who had a fascination with firetrucks.
The rig is part firetruck, part boat, and part tow truck. You can clearly see the boat part of the vehicle that has been added to the firetruck. There is also a ramp (extended in the picture) that can load and carry a car.
I’ve seen a lot of unique builds while traveling and this one is definitely the most unique.
I’ve been doing a lot of reading and a lot of soul searching lately. Having to dig deep into Machine Learning wasn’t on my 2020 list of things to do. I’d really planned to spend this year improving my Apple platforms developer skills. Learning Python and a bunch of new concepts is a real detour for me.
To better understand whether Machine Learning was something I wanted to go ahead with, I did some research on how much education you need to get into it. It turns out there is quite a bit to it, but there is also a good deal of overlap with my existing skillset. For example, my business programming career left me with lots of skill in manipulating and cleansing large amounts of data programmatically. That, and being able to program at all, are good starting points.
The other big prerequisite is math: specifically linear algebra, statistics, and probability. I used to know how to do that stuff, but that was 30 years ago. The good news is that Khan Academy has courses I can take as refreshers. All this Machine Learning stuff is within my reach.
I’ve decided to go ahead and get proficient with Machine Learning. It is a skill not too far out of my grasp. Besides, I need something to do.
Coursera has a Machine Learning course that they are giving away during the pandemic. Coursera looks like a great way to get your credit card charged for classes you haven’t taken or signed up for. At least that is what the online reviews say. Needless to say, they won’t be getting a credit card number from me. I am going to take the free class. I’m dubious as to how good it will be, but as long as it isn’t giving me outright misinformation, I think I’ll be OK.
After that, I’ll be taking the Khan Academy classes to get my math back up to par. In the mean time, I’m following the Towards Data Science blog. They put out lots of good material and the more I read of it the more I’m beginning to understand.
All of this will take some time. If I make any progress towards Feed Spider, I’ll blog about it. Don’t expect much for a while though. 🤓
I made two changes in my latest run. I probably should make only one change at a time so I can narrow down what is helping and what is setting me back. Still, I went ahead and moved down one more level for the targeted categories. This gave me a lot more categories that could come up. My second change was that I only selected the categories most closely related to the article’s category. The net effect, I estimate, is more categories overall but fewer category labels per article.
I ran all the processes leading up to the supervised training. Then I started the training itself and took a 4 hour break while it ran. I should have waited around and checked the ETA for completion before taking my break. When I got back, there were still 11 hours of training remaining.
I wouldn’t consider it a big deal just to go and do something else for 11 hours, except that the supervised training was using 100% of all the CPUs on my work computer. I’m impressed with how responsive macOS stays under that kind of load. If all I wanted to do was some light work, I could just let it run. What I really wanted to do was fix bugs in NetNewsWire while it ran. Compile times are just too frustratingly long while the rest of the computer is maxed out.
I figured it was finally time to spin up an on-demand Amazon instance. After doing some superficial research, I decided to give Amazon’s new ARM CPUs a go. They have the best price-to-performance ratio, and since everything I’m doing is open source, I can compile it for ARM just fine.
The first machine I picked out was a 16 CPU instance. I got everything set up and started the supervised training. It was going to take 10 hours. Not good enough, so I detached the volume associated with the instance so that I wouldn’t have to set up and compile everything again. I attached the volume to a 64 CPU instance and tried again. 10 hours to run. I checked and was only getting 1200% CPU utilization.
I’d assumed that fastText was scaling itself based on available CPUs, since it was sized perfectly for my MacBook. I have 12 logical processors and it was maxing them out. It turns out that you have to pass a command line parameter to fastText to set the number of threads for it to use. 12 is the default, and by coincidence it matched my MacBook perfectly.
I restarted again using 64 threads this time, expecting great things. Instead I got NaN exceptions from fastText. Rather than dig through the fastText code to find the problem, I took a shot in the dark and started with 48 threads. That worked and had an ETA of 3 hours. A 48 CPU instance is the next step down for Amazon’s new ARM CPU instances, so that is where I’ve settled in for my on demand instance.
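For reference, the flag in question is `-thread`. Here is a sketch of building the training invocation; the file and model names are placeholders, not the real project paths:

```python
# Hypothetical sketch of the fastText invocation with an explicit thread count.
# File and model names are placeholders, not the real project paths.
def fasttext_train_cmd(train_file, model_name, threads):
    return [
        "./fasttext", "supervised",
        "-input", train_file,
        "-output", model_name,
        "-thread", str(threads),  # fastText defaults to 12 threads if omitted
    ]

cmd = fasttext_train_cmd("articles.train", "blog_model", 48)
# subprocess.run(cmd, check=True)  # uncomment on a machine with fastText built
print(" ".join(cmd))
```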
As a sidebar, I would like to point out that this is a pretty good deal for me. The 48 CPU instance is $1.85 per hour to run. I’m not sure how to compare apples-to-apples with a workstation, but a 40 thread CPU workstation from Dell runs around $5k. Since I’m primarily an Apple platforms developer, I wouldn’t have any use for it besides doing machine learning. It would mostly sit idle and depreciate in value. I would have to do more than 2,500 hours of processing to come out ahead by buying a workstation. That’s assuming a $5k workstation is as fast as the 48 CPU on-demand instance, which I doubt.
After 3 hours, the model came out 70% accurate against the validation file. That’s pretty good, but what about in the real world? Pretty shitty still. Here is One Foot Tsunami again.
The new model simply doesn’t find any suggestions a lot of the time. See the “Pizza Arbitrage” example above. The categories that it does find kind of make sense? They are trash for categorizing a blog though.
One of my assumptions when starting this project was that Wikipedia’s categories would be useful for doing supervised training. I really don’t know if that is the case. How things are categorized in Wikipedia is chaotic and subjective. You can tell that just from browsing them yourself on the web. My hope that machine learning would find useful patterns in them is mostly gone.
It is time for a change in direction. Up until this point, I had hoped to get by with a superficial understanding of Natural Language Processing and data science. I can see now that won’t be enough. I’m going to have to dig in and understand the tools I’m playing with better, as well as think about finding a new way of getting a trained model to categorize blogs.
It took around 40 hours to expand and extract all of Wikipedia using WikiExtractor. In the end, I had 5.6 million articles extracted. Wikipedia has 6 million articles, so WikiExtractor tossed out 400k of them, possibly due to template recursion errors, something it would occasionally complain about as it worked.
My next step was to fix that slow query that is used to roll up categories. I had no idea what I was going to do about it given the complexity of the query and amount of data that it was processing. Still, I thought I better do my due diligence and run an EXPLAIN against the query to tune it as much as I could.
I was surprised to see that the query was doing a full sequential scan of the relationship table. I thought that I had indexed its columns, but hadn’t. I only needed an index for one side of the relationship table, so I added it. I reran the query and it now consistently came back within tens of milliseconds, as opposed to multiple seconds. This was a massive improvement.
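As an illustration of the kind of fix this was, here is a minimal sketch using SQLite so it stays self-contained; the real database is PostgreSQL, and the table and column names here are made up:

```python
import sqlite3

# Minimal illustration of the indexing fix, using SQLite so it is self-contained.
# The real database is PostgreSQL; table and column names here are made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE category_relationship (child_id INTEGER, parent_id INTEGER)")
con.execute("CREATE INDEX idx_rel_child ON category_relationship (child_id)")

# The query plan now shows an index search instead of a full table scan.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT parent_id FROM category_relationship WHERE child_id = ?",
    (42,),
).fetchall()
print(plan[0][-1])
```

The same check in PostgreSQL is `EXPLAIN`, which is exactly how the sequential scan showed up in the first place.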
Another change I made was that I went down another level in categories from the main content category. This netted about 10,000 categories that we would roll up into, versus the hundreds we had before. My hope was that this level would provide more useful categories for blogs.
I had to rewrite the Article Extractor now that it wasn’t going to be processing raw Wikipedia data any longer. Now it would be reading the JSON files generated by WikiExtractor. This would be much faster, especially since I got the roll up query fixed. Last time I ran the Article Extractor, it took all night long to extract only 68,000 records. This time I ran it and processed 5.6 million records in less than 2 hours. 💥
I was excited at this point and ran that output through the Article Cleaner to prepare it for training by fastText. That process is quick and only takes about ½ hour to run. Now for fastText training. I ran it with the same parameters as last time, just this time with a much, much larger dataset. fastText helpfully provides an ETA for completion. It was 4 hours, so I went to relax and have dinner.
After the model was built, I validated it and this time it only came out with 60% accuracy. That was a disappointment considering that it was 80% last time. Forging ahead, I ran the new model against a couple blogs. Testing against technology blogs gave varying and disappointing results.
The results for One Foot Tsunami are now more specific and more accurate. They still aren’t very useful. I decided I would try a simpler blog, a recipe blog, to see if that would improve results. These are the results for “Serious Eats: Recipes”.
At least it picked categories with “food” in the name a couple times. Still the accuracy is off and the categories not helpful. I need something that people would be looking for when trying to find a cooking or recipes blog.
I’m feeling pretty discouraged at this point. I think a part of me thought that throwing huge amounts of data at the problem would net much better results than I got. I have learned some things lately that I can try to improve the quality of the data. I’m not out of options and am far from giving up.
I think the next thing I will try though, is going down one more level in categories. Maybe the categories will get more useful. Maybe the accuracy will increase. Maybe it will get worse. I won’t know until I try.
I’ve got something strange happening with NetNewsWire’s CloudKit integration. I consider the code stable at this point. I’ve been running it for weeks across 3 different devices and they never go out of sync.
My problem is that the CloudKit operations seem to pause for extended periods of time. This could be an hour, but then it will just break loose and start working again. Restarting the app also clears the problem up. I’d suspect a deadlock of some kind, but it will start back up again without intervention.
What is strange is that it only happens on macOS. It never happens on iOS. It seems to be worse if my system is under load or if I’ve left NNW running for an extended period of time. It happens for both fetch and modify operations. I’m at a loss as to whether this is a test environment issue, something with my machine, or a coding problem. Anyone ever seen anything like this before?
My first run at classifying blogs ended predictably badly. Not horribly badly, I guess. If you squint really hard, you can see that some of the categories kind of make sense. They just weren’t generally useful due to their vagueness. The categories found were things like “culture” or “humanities”, which could mean almost anything. Things are going to have to get more specific and more accurate.
One of the things I noticed while validating the categories and relationships I extracted was that some were missing. It turns out that Wikipedia sometimes uses templates for categories. A Wikipedia template is a server-side include, if you know what that is: basically, a way to put one page inside another page. I didn’t have a way to include template contents in a page while parsing it, and was missing categories because of it.
I’ve started reading fastText Quick Start Guide and am about a ¼ of the way through it. I haven’t learned much about NLP yet, but I have gotten more tools to play with. One of them is another Wikipedia extraction utility, WikiExtractor, and it handles templates!
Something I always do when looking at a new GitHub project is check out the open issues and pull requests. It tells you a lot about how well maintained the project is. One open pull request for WikiExtractor is “Extract page categories”. I’m glad I saw that pull request, because I didn’t know WikiExtractor didn’t extract categories. It also gave me the code to extract them. I grabbed the pull request version and got to work.
I did a couple test runs and realized that although I was getting full template expanded articles with categories, I wasn’t getting any category pages. The category pages are how I build the relationships between categories. WikiExtractor is about 3000 lines of Python, all in one file. After a couple hours of reading code, I was familiar enough with the program to modify it to only extract category pages and bypass article pages. I’ll extract the article pages later.
I wrote a new Category Extractor that took input from WikiExtractor and reloaded my categories database. Success! I now had the missing categories. Before, I had about 1 million categories. Now I have 1.8 million. Due to this change and fixing some other bugs, my category relationship count went up from 550,000 entries to 3.1 million. This is a lot higher fidelity information than I had loaded before.
The larger database makes a problem I had earlier even harder now. How to roll up categories into their higher level categories. This was a poor performer before and now that I will be extracting articles again and assigning them categories, I’m going to have to make it go faster. It ran so slow that I only had 68,000 articles to train my model with and I want to use a lot more than that next time.
That’s the next thing to work on. In the meantime, I’m running WikiExtractor against the full Wikipedia dump to give me template expanded articles. This is running much slower than when I just extracted the category pages and may take a couple days to complete. My poor laptop. If I have to extract those articles more than once using WikiExtractor, I’m going to set up a large Amazon On-Demand instance to run it on. Hopefully, it won’t come to that.
I put a test harness around the prediction engine for fastText. The test harness downloads and cleans an RSS feed and asks for the most likely classification. Here are some results from One Foot Tsunami:
Each row is an article title from the feed followed by the classification derived from the article content. I’m both encouraged and strangely disappointed at the same time. Things seem to be working, but clearly I need to do some work on what my categories are.
Initially, I tried combining all the articles in the feed and running that through the prediction engine. It always gave “chronology” back as the classification. Individual articles seem to give better results. I’ll probably end up classifying by article and taking the most common classifications as the feed’s.
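That per-article roll-up could look something like the sketch below. The labels are made-up examples; in practice each one would come from running the model’s prediction on a single cleaned article.

```python
from collections import Counter

# Sketch of rolling per-article predictions up into a feed-level classification.
# The labels below are made-up examples; in practice each one would come from
# running the model's prediction on a single cleaned article.
def classify_feed(article_labels, top_n=3):
    """Return the most common article classifications as the feed's."""
    return [label for label, _ in Counter(article_labels).most_common(top_n)]

labels = ["humor", "technology", "humor", "chronology", "humor", "technology"]
print(classify_feed(labels))  # most frequent classifications first
```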
I think “chronology” might be the default classification in the model. I see it come up a lot. Looking at the Wikipedia page for Category:Chronology has me thinking anything with a date in it will roll up to it. It looks like there will be troublemaker categories that I have to delete from the database, like “chronology”. I’ve already eliminated the ones with the word “by” in them. These were things like “Birds by state”, which would clearly be better described by another classification.
I think I’ll probably fall into a cycle of tweaking the categories and then running the rest of the flow to see how well the predictions improve. That means making that slow category roll-up query run faster. I think I have my work cut out for me tomorrow.
Yesterday, I had just gotten the categories and category relationships loaded into the relational database and identified the categories I want to use for blogs. The next step was rolling up all the hundreds of thousands of categories into those roughly 1300 categories.
I came up with a query that I thought would work. This isn’t easy, because Wikipedia’s categories aren’t strictly hierarchical. They kind of are, but it is really more of a graph than a hierarchy. What I mean is that a specific article can have multiple top level categories, and there are many paths to get to them. You can see this if you click on one of the categories at the bottom of a Wikipedia article. It will take you to a page about that category, and at the bottom of that page are more categories that this one belongs to. Since there is more than one, the path to the top isn’t obvious, and there are many paths.
One pitfall to walking a graph like this is getting stuck in a loop. For example category A points to category B, which points to category C, which points to category A. Another problem is getting back to too many top level categories. Finally, you have to deal with the sheer number of paths that can be taken. Say the category we are looking for has 5 parent categories and those have 5 each and those have 5 each. That’s 125 paths to search after only going up 3 levels.
In the end, I put in code to limit recursion, capped the results at 5 categories, and searched only 4 levels upwards. The query still takes about 700ms to run, which is very slow. That is not good.
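For reference, the same guardrails (cycle detection, a cap on results, and a depth limit) look roughly like this breadth-first walk in Python. The real implementation is a SQL query, and the category names and parents mapping here are illustrative:

```python
# Illustrative Python version of the guardrails: cycle detection, a cap on
# results, and a depth limit. The real implementation is a SQL query, and
# the category names and parents mapping here are made up.
def roll_up(category, parents, targets, max_depth=4, max_results=5):
    found, frontier, seen = set(), {category}, {category}
    for _ in range(max_depth):
        next_frontier = set()
        for cat in frontier:
            for parent in parents.get(cat, ()):
                if parent in seen:
                    continue  # cycle guard: never revisit a category
                seen.add(parent)
                if parent in targets:
                    found.add(parent)
                    if len(found) >= max_results:
                        return found
                else:
                    next_frontier.add(parent)
        frontier = next_frontier
    return found

parents = {
    "Novelistic portrayals of Jesus": {"Jesus in literature"},
    "Jesus in literature": {"Biblical studies", "Literature"},
}
print(roll_up("Novelistic portrayals of Jesus", parents, {"Biblical studies"}))
```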
When building a complex system it is important to address architectural risk as early as possible. I’m sure you have heard stories of projects getting cancelled after spending months or even years of development. Lots of times this is because a critical piece of the architecture proved unviable late in the project. Tragically a lot of work has usually been done that relied on that piece before discovering that it won’t work.
The biggest piece of architectural risk in this system is the machine learning parts. We want to get to that as quickly as possible so that we don’t end up doing work that may end up being thrown away. So I decided to move on to building the Article Extractor instead of optimizing the query or even trying to make it more accurate. We can come back to it later after risk has been addressed.
The Article Extractor, being very similar to the Category Extractor, didn’t take long to code and test. Its job is to read in the Wikipedia dumps for an article, assign the roll-up categories, and write them out to a file. Since it relied on the slow query for part of its logic, I knew it would be slow. So I fired up 9 instances of the Article Extractor and let them run for about 14 hours.
When I finally checked the output of the Article Extractor it had produced only 68,000 records. That isn’t very much, but should be enough for us to move on to the next step. We can go back and generate more data later if this doesn’t prove sufficient.
The next step is to prepare fastText to do some blog classifications by training the model. I don’t know much of anything about fastText yet. I’ve bought a book on it, but haven’t read it. To keep things moving along, I adapted their tutorial to work with the data produced thus far.
I wrote the Article Cleaner, see flowchart, as a shell script. It combines the multiple output files from the Article Extractor processes, runs them through some data normalization routines, and splits the result 80/20 into two separate files. The bigger file is used to train the model and the smaller to validate it.
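The combine-and-split step can be sketched in Python like this. The records use fastText’s `__label__` convention, and file reading and writing are omitted to keep the sketch self-contained:

```python
import random

# Sketch of the Article Cleaner's 80/20 split. Records use fastText's
# "__label__" convention; file reading and writing are omitted.
def split_dataset(lines, train_ratio=0.8, seed=42):
    shuffled = list(lines)
    random.Random(seed).shuffle(shuffled)  # shuffle so the split is unbiased
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

records = [f"__label__cat{i % 3} article text {i}" for i in range(100)]
train, valid = split_dataset(records)
print(len(train), len(valid))  # 80 20
```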
Supervised training came next. I fed fastText some parameters that I don’t fully understand, but that come from the tutorial, and validated the model. I was quite shocked when the whole thing ran in under 3 minutes. The fastText developers weren’t kidding when they named the project.
The numbers are hard to read, but what they are saying is that we are 88% accurate at predicting the correct categories for the Wikipedia articles fed to it for verification. In the tutorial, they only get their model up to 60% accurate, so I’m calling this good for now. Almost assuredly, our larger input dataset made us more accurate than the tutorial. Eventually, I’ll do some reading and hopefully get that number even higher.
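For context, the metric fastText’s `test` command reports is precision at one: how often the model’s single top prediction is among an article’s true labels. A hand-rolled version, with made-up examples:

```python
# Precision at one: the fraction of validation examples where the model's
# top prediction matches one of the true labels. The examples are made up.
def precision_at_1(examples):
    """examples: list of (true_labels, predicted_label) pairs."""
    hits = sum(1 for true_labels, predicted in examples if predicted in true_labels)
    return hits / len(examples)

examples = [
    ({"__label__food"}, "__label__food"),
    ({"__label__history", "__label__war"}, "__label__history"),
    ({"__label__math"}, "__label__science"),
]
print(precision_at_1(examples))  # 2 of 3 correct
```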
Now it is almost time for the rubber to hit the road. The next step is to begin feeding blogs to the prediction engine and see what comes out. At this point, I’m not too concerned about the machine learning part working. I’m mostly concerned that the categories we’ve selected won’t work well for blogs. I guess I’m about to find out.
Wow, the Wikipedia category data I loaded was bad. Really bad. I guess that should be expected considering that was the first run of the Category Extractor. Still, I expected better.
I’ve decided to use the same category classifications as Wikipedia does for determining the main topics. The “Main topic classifications” page lists out the top level categories. From there on down, there are subcategory after subcategory of classifications.
For example, if you drill down into “Academic disciplines”, you get a listing of its categories.
I should have been able to query my database after loading it and see all the subcategories under “Main topic classifications”. About half were missing. When dealing with 16GB of compressed text, where do you even start? I could see that I was missing the “Mathematics” subcategory, but had no idea where in that pile of 16GB to look.
Eventually, I wrote a program to extract the “Mathematics” category page that had the relationships in it that I was looking for. Then I was able to test with a single page instead of millions to find some bugs. As was typical for me, my logic was sound, but I had made a mistake in the details. I’d gotten in a hurry and made a cut and paste error when assigning a field to the database and was inserting the wrong one. A little desk checking might have saved me half a day or so of debugging.
I fixed my bug and started up the process again. Since I’d added another constraint on the database to improve the quality of the category relationships, the process was running slower than it had before. It was now taking about 2 or 3 hours to run. I never stick around to time it, so I fired it off and went to bed.
This morning I got up early and checked the data. It looks good now!
I’m able to recreate the category relationships in the Wikipedia pages. Nothing is missing! What you see in the query in the above screenshot is grabbing all the subcategories for “Main topic classifications” and their subtopics. This yields 1351 topics. I think that is a reasonable amount of labels to train the model with. At least that is a good starting point to see how it will shake out.
I’m envisioning having a way for users to select “Academic disciplines” and then choose from the resulting list, “Biblical studies” and then see a listing of blogs that fall into that category. They should also be able to search for “biblical” and get the same thing. Possibly we could even get to the point that searching for “bible” turns up the correct blogs using word vectors.
Now that I can go down from “Main topic classifications” to the categories I want identified, I have to go the other way. If a page has the category “Novelistic portrayals of Jesus”, I need to be able to roll that up to “Biblical studies”. After that gets figured out, I can begin extracting articles for the training model.
I’ve been analyzing the database I created for Feed Spider that models the Wikipedia categories in a relational database. I’m using PostgreSQL for the database and wanted something nicer than the command line to execute my queries.
I’ve only been using it a couple days, but it has made my life much nicer while working with both DDL (data definition language) and DML (data manipulation language). It was an easy decision to buy it. It’s only $40 if you get it from their site or $50 through the Mac App Store.
Some days I can’t help but step back and marvel at where we are technologically in the developer world. I used to work at companies that paid big money for Oracle or DB2 with their shitty Java based developer frontends. Now I can have a high powered database for free, with a real AppKit front end, for less than the price of a night out with my wife. Good times.
I’ve been working to get Feed Spider development started, and now it has. One of the challenging things about starting a new project is getting the development environment set up. More than once, I’ve seen this lead to analysis-paralysis on projects. There is a strong urge to get things planned and set up correctly to get the project off to a good start.
I’m not completely immune to this, even if I know it is a danger. I spent about a day working with Docker, which I have next to no knowledge of, to come up with a portable dev environment. My thinking was that with Docker I could set up developers with a full environment, including dependencies, with little effort. In my head, Docker would handle the PostgreSQL database, Python, Python dependencies, compiling fastText, etc…
I realized that this was beyond my knowledge of Docker and Docker Compose, and that I would have to get an education in them before I even got started. Struggling with this felt too much like I was getting paralyzed and working on the wrong things. So I put it to the side. It is something I can add later. In the meantime, I got PostgreSQL installed, fastText compiled, and Python up and going with all my dependencies.
I’m new to Python, but am picking it up quickly. I was able to write the Category Extractor that scans all of Wikipedia for the categories assigned to the articles. It also understands the relationship between those categories so that they can be rolled up. Python has some very useful libraries for parsing Wikipedia and tutorials on how to do it. It all came together a lot faster than I thought it would.
The actual processing time to extract the categories and load them to a database was faster than I thought it would be too. Compressed, all the articles in Wikipedia are about 16GB. Uncompressed, they are supposed to be over 50GB. There are 60 compressed files that you can download. I processed them individually, but 8 at a time. It took less than an hour to go through all of Wikipedia on my 2018 MacBook Pro.
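That 8-at-a-time fan-out can be sketched with the standard library. The dump file names and the `process_dump` body are placeholders; the real work decompresses and parses each file. For CPU-bound parsing a process pool would be the better fit, but a thread pool keeps the sketch self-contained:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of fanning out over the 60 dump files, 8 at a time. The file names
# and the process_dump body are placeholders; the real work decompresses and
# parses each compressed dump file.
def process_dump(path):
    return f"processed {path}"

paths = [f"enwiki-part{i:02d}.xml.bz2" for i in range(60)]
with ThreadPoolExecutor(max_workers=8) as pool:  # 8 files in flight at once
    results = list(pool.map(process_dump, paths))
print(len(results))  # one result per dump file
```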
I ended up finding over 1 million categories and over 550,000 category relationships. That sounds about right considering there are 6 million articles and articles have multiple categories.
Now I just have to make some sense of all that data. The next couple days will be important to see if that is possible. If I can’t figure out how to roll up and narrow down those categories, I’ll have to figure out another way to train my Text Classification model.
In the first post about Feed Spider we discussed the motivation behind creating a feed directory. We also discussed some software components that can be used to create Feed Spider. Now we’re going to try to tie that all together.
The architecture and design of Feed Spider is at the inception phase. Writing this blog post is an exercise in helping me better understand the problem space as much as it is for communicating what I am trying to do. As they say, no plan survives first contact with the enemy. This plan is no different and I expect it to be iterated over and refined as more gets learned and implemented.
Feedback and criticisms are welcome. Changing a naive approach is easier the sooner it is caught.
Pictures always help and sometimes the old ways are best.
The first problem that we run into with categorizing RSS feeds is deciding which categories to put them into. What are those categories? Wikipedia supplies us with categories associated with each article. The problem is that these categories are hierarchical: categories can themselves contain other categories, and multiple categories are assigned to each article. There are probably thousands of categories in Wikipedia. We will need to roll up the category hierarchy and reduce the number of categories used.
To do so, we will extract the categories used in Wikipedia and load them into a relational database using a new process called Category Extractor. We can then do some data analysis using SQL to get a rough idea of what the top-level categories are and how many articles are under them. Once we understand the data better, we should be able to come up with criteria for tagging high-value categories. A process called Category Valuator will be run against the database to identify and tag the high-value categories we want to extract articles for.
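To make that analysis concrete, here’s a sketch using SQLite and a made-up schema. The real database is PostgreSQL, and the table names and data here are invented purely for illustration:

```python
import sqlite3

# Hypothetical, simplified schema for the extracted data: one table of
# categories with article counts, one table of parent/child relationships.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE category (name TEXT PRIMARY KEY, article_count INTEGER);
CREATE TABLE category_rel (child TEXT, parent TEXT);
INSERT INTO category VALUES ('Science', 500), ('Physics', 200), ('Stubs', 5);
INSERT INTO category_rel VALUES ('Physics', 'Science');
""")

# Rough analysis query: top-level categories (ones with no parent),
# ordered by how many articles sit directly under them.
rows = conn.execute("""
    SELECT c.name, c.article_count
    FROM category c
    LEFT JOIN category_rel r ON r.child = c.name
    WHERE r.parent IS NULL
    ORDER BY c.article_count DESC
""").fetchall()
print(rows)  # [('Science', 500), ('Stubs', 5)]
```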
The Article Extractor process will scan Wikipedia for articles that have the categories that we are looking for. Initially we will pull 100 articles per category and increase that number as needed for the training file. An additional 10% of the records will be pulled in a separate file to use to test the model. The output records will be formatted for use by fastText. Each record will have one or more categories (or labels) associated with the article text.
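fastText’s supervised training format puts one example per line, with each label prefixed by `__label__` and followed by the text. A small helper showing the format; the function name is mine, not fastText’s:

```python
def to_fasttext_record(labels, text):
    """Format one training example the way fastText's supervised mode
    expects: each label prefixed with __label__, then the article text,
    all on one line."""
    prefixed = " ".join("__label__" + l.replace(" ", "_") for l in labels)
    return f"{prefixed} {text}"

print(to_fasttext_record(["Science", "Physics"], "An article about quarks."))
# __label__Science __label__Physics An article about quarks.
```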
Training models work better on clean data. For example, capitalization and punctuation can degrade fastText results. We will preprocess the data in an Article Cleaner process.
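A minimal sketch of what the Article Cleaner might do, assuming the usual lowercase/strip-punctuation/collapse-whitespace steps; the exact preprocessing is still to be worked out:

```python
import re
import string

def clean_article(text):
    """Lowercase, strip punctuation, and collapse whitespace so the
    training data is uniform for fastText."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip()

print(clean_article("Hello,  World! It's   fastText."))
# hello world its fasttext
```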
The clean data will be passed into fastText to train the classifier model. The test file will be used to validate the training model. Training parameters will be tweaked here to improve accuracy while maintaining reasonable performance.
Eventually we will seed our RSS Web Crawler process with the Alexa Top 1 Million Domains. Initially however, we will probably test by crawling a blog hosting site like Blogger. We should filter out RSS feeds by downloading them and checking last posted date and content length. This will favor blogs that post full article content over summary blogs. This is necessary so that we have enough content to make a category (label) prediction about the feed.
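As a sketch of that filter, here’s roughly how a feed check might look using only the standard library. The thresholds (minimum characters, maximum age) are made-up example values, and a production version would need more robust date handling:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

def feed_is_usable(rss_xml, min_chars=500, max_age_days=60):
    """Rough filter: keep a feed only if it posted recently and its items
    carry enough text to classify (favoring full-content feeds)."""
    root = ET.fromstring(rss_xml)
    items = root.findall("./channel/item")
    if not items:
        return False
    dates = [parsedate_to_datetime(i.findtext("pubDate"))
             for i in items if i.findtext("pubDate")]
    if not dates:
        return False
    if datetime.now(timezone.utc) - max(dates) > timedelta(days=max_age_days):
        return False  # feed looks abandoned
    avg = sum(len(i.findtext("description") or "") for i in items) / len(items)
    return avg >= min_chars  # summary-only feeds tend to fall below this
```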
Our Labeled Feed Database will be generated by the RSS Prediction Processor. This process will call out to fastText to get a prediction of which categories match the RSS feed. It will also extract RSS feed metadata. This information will be combined to generate an output database of labeled (categorized) feed information.
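A sketch of how the prediction step might combine model output into stored labels. The `predict` argument stands in for a trained fastText model’s `predict()` call (which returns labels and probabilities); the threshold and helper are my own illustration:

```python
def label_feed(feed_text, predict, threshold=0.5):
    """Turn model predictions into the labels we store for a feed:
    keep labels above a confidence threshold and strip the __label__
    prefix before writing them to the database."""
    labels, probs = predict(feed_text, k=5)
    return [l.removeprefix("__label__")
            for l, p in zip(labels, probs) if p >= threshold]
```

In the real processor, `predict` would be the loaded fastText model, and the result would be combined with the feed’s metadata before insertion.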
The database should allow for browsing by category (label). It should also allow for full-text searching of feed titles and/or categories.
The Labeled Feed Database should be able to be embedded in a client application. An RSS Reader is a prime example of where this could be used. A user interface should allow users to search the database, find feeds, and subscribe to them.
This high level overview should give you an idea of what we will build and how much work is involved. There is room for improvement. For example, the Labeled Feed Database is just a text search / browsing database. There might be something we could do with Machine Learning to better match search criteria with the labeled feeds.
Now on to implementing, iterating, and learning.
I’d like to make a directory of blogs that users can search or browse to find blogs that they are interested in. I don’t think such a thing really exists right now, at least not one that works well. The only ones I know about are part of a subscription service, like Feedly.
It’s understandable why there really isn’t such a thing. There isn’t a lot of money in blogs these days. Marketing dollars have moved on to a highly privacy-invasive model that doesn’t work well with decentralized blogs. Servers and search engines are costly to run. I think that’s why there isn’t a quality RSS search engine or directory outside of paid services.
I’d like to discuss approaching it differently. What if we created a fairly compact database that could be bundled in an application? It wouldn’t be able to cover every blog or every subject, but it could hold a lot. It could be enough to help people find content so that they spend more time reading blogs and enjoying their RSS reader.
To build our small database we need to do a couple things. We need to find blogs and we need to categorize blogs.
A hand curated directory of blogs is too labor intensive. I don’t think there is a chance that enough volunteers would show up to create a meaningful directory. We’ll have to write some software to make this happen.
Fortunately one of the most studied areas in Machine Learning is Text Classification. This means that there are open source solutions and free vendor supplied solutions. Apple’s Create ML is a good free solution that includes Text Classification. fastText is an Open Source project that focuses on text Machine Learning.
I think fastText is the correct choice for a couple of reasons. The first is that it supports multi-label classification, while Create ML only supports single-label classification. The second is that fastText is cross-platform and will run on commodity Linux machines, while Create ML requires a Mac.
To train our Text Classification model, we will need some input data. I think Wikipedia will be a good source for this. Wikipedia has good article content and categories associated with those articles. To process Wikipedia articles, you have to parse them. They are in a unique format that isn’t easy to extract data from. Fortunately there is mwparserfromhell, which we can use to parse the articles. We should now be able to get the input data we need to train our model.
Assuming we’ve found a way to classify blogs, now we need to find them. Scrapy is an Open Source web spider that can be customized. I’m going to assume for now that since it is Open Source, it can be extended to crawl for RSS feeds.
All the components that I’ve found thus far are written in Python or have Python bindings. Everything I’ve discussed is stuff a Data Scientist would do. I’m not a Python developer or a Data Scientist, so I’ve got a lot to learn and a lot of hard work ahead.
That hasn’t stopped me from trying to figure out how all of this will come together. In Part 2, I’ll discuss the architecture and high level design in more detail.
I found an old photo from a couple years ago. It’s of my wife, Nicole, sitting along the bank of a river in Northern California. Just sitting along the river, listening to music, and having a few beers. It will be a long time before we’re able to do camping like this again.
I just got done implementing iCloud support in NetNewsWire. We are still doing preliminary testing on it and aren’t ready for public testing. I don’t know which release it will be in unfortunately. That depends on how initial testing does and then public testing.
I thought I’d write up some of my initial impressions of CloudKit. This isn’t a tutorial, although you might find some of the information useful if you are looking to develop with CloudKit.
The process for learning any major technology from Apple seems to be about the same and I found CloudKit no different. I read a couple blog posts on CloudKit when I got started. Then I watched some old WWDC videos on it. Then I searched Github for open source projects using it and read their code. Then I implemented it while relying on the API docs.
I found reading the code from an actual project to be most helpful. A blog post and a couple of WWDC videos only get you to the point where you are dangerous to yourself and others. CloudKit has some advanced error handling that needs to be implemented for it to work at all. It is hard to pick that up from only a few sources.
The hardest part about implementing the basic syncing process was leaving my relational database design knowledge behind. A CloudKit record is not a table, even if superficially they look the same.
This becomes very obvious when you look at how you get only the items that have changed in a database since the last time you checked. CloudKit has a feature that will return only the changes made to records, which saves greatly on processing. You don’t necessarily get those changes in an order you can rely on, so I don’t recommend managing complex relationships between records. I ended up doing a couple of things that went against all my training to get it to perform well.
Once you figure out how to model your CloudKit data and understand the APIs, things fall together fairly quickly. We have other RESTful services that do syncing in NetNewsWire, and CloudKit is the simplest implementation we have.
One area where CloudKit outshines our RESTful service implementations is that it gets notifications when the data changes. This keeps our data more up to date. With the RESTful services, we sync which feeds you are subscribed to every so often via polling, at shortest around every 15 minutes. Realtime updates to your subscription information aren’t necessary, but it is fun to add a feed on your phone and watch it appear in realtime on the desktop.
One thing I wanted to do was provide a centralized repository that knew which feeds had been updated and when. I planned to have a system that would use the various NetNewsWire clients to update this data and notify the clients. My theory was that checking one site for updated feeds would be faster than testing all the sites to see if their feeds had updated.
I ended up giving up on this task. I think it would have been possible to implement in CloudKit, but it would not have been faster than checking all the sites for their feed updates. You see, we can send out hundreds of requests to see if a feed has been updated, all at the same time. Typically they return a 304 status code that says they weren’t updated, and they don’t return any data at all. This is very fast, and all the site checks happen at the same time. This is how the “On My Device” account works and it is very fast.
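For those unfamiliar, the mechanism behind this is the HTTP conditional GET. A sketch of building such a request with the standard library (the URL is a placeholder; NetNewsWire itself is Swift, this is just to illustrate the protocol):

```python
import urllib.request

def conditional_request(url, etag=None, last_modified=None):
    """Build a conditional GET using the validators saved from the last
    fetch. If nothing changed, the server answers 304 Not Modified with
    an empty body, which is why checking hundreds of feeds is cheap."""
    req = urllib.request.Request(url)
    if etag:
        req.add_header("If-None-Match", etag)
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    # Callers would urlopen(req) and treat HTTPError 304 as "no changes".
    return req
```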
The reason I couldn’t get CloudKit to work faster than checking all the sites individually comes down to one thing. There is no such thing as a “JOIN” between CloudKit records. If I could have connected data from more than one record per query I could have done some data driven logic.
What I wanted to do was have one record type that contained information about all the feeds that NetNewsWire iCloud accounts were subscribed to. This would contain information about whether the feed had been updated. I needed to join this to another record type that had an individual’s feeds so that I could restrict the number of feeds that a user was checking for updates.
I could have implemented something that didn’t use the “JOIN” concept, but it would have required lots of CloudKit calls and passing the data I wanted to JOIN in every call. It would have been unnecessarily complex and would not have performed better than just checking the site.
I think that CloudKit is amazing for what it is intended to do: syncing data between devices. I think it has the potential to do more and I’ll be watching to see if Apple extends its capabilities in the future. There may be more yet that we do with CloudKit in NetNewsWire.
The new iPadOS cursor is amazing. I’m so impressed by Apple on this one. They successfully reinvented a concept none of us thought even needed updating. I hope they bring some of these ideas to the macOS cursor.
If you have any problems or questions about the code, be sure to jump into the NetNewsWire Slack. We’ll help you out. Building NetNewsWire
Looks like a good day to stay inside and code.
This is what it typically looks like while I work on @netnewswire. #vanlife
I’m trying to get some @NetNewsWire coding done today and the “E” key on my keyboard is sticking. I have to keep prying it back up. This MacBook Pro is going into the shop tomorrow for repairs.
Since we wanted to stay close to Phoenix while we waited for our heater parts to come in, we went to the closest free campsite. Since it was close to the Phoenix suburbs, I expected the campsite to be crowded, and it was. The campsite in total was less than the size of a football field and everyone was right on top of each other. The ground was uneven and in the center was a pool of rancid water. It turns out this was the trailhead for a popular horse trail, which explained the large piles of horse shit everywhere.
I try to make friends when we are camping around other people. If folks like you, they will look out for your stuff when you aren’t around. They are also much more likely to lend a helping hand if you need it.
We were camped (parked) next to a fellow in a minivan. He had a table set up and was grilling Polish sausages on it. I waved to him and he apologized for all the smoke he was creating. I told him that it smelled good and not to worry about it. He offered me a sample and I ate it. He then told me he’d gotten the sausages cheap and that it was dinner for his three dogs. I wondered if I was going to get sick from eating this bargain-basement sausage as I looked over the man’s camp. He was clearly wasted, hadn’t washed or changed for at least a week, and there were beer bottles lying everywhere.
We chatted and I let him know that we had just recently gotten to Arizona. He proceeded to give me tips on the best places to go and when to go there. He’d been living on the road for quite a while and spent a good deal of time living cheap in Arizona. His story was one I’d heard before. His motorhome had burned down the year before. Luckily he had been able to get his tow-behind car away from the fire so that he could later trade it in on the minivan.
Nic and I only stayed one night there. I decided that while we were waiting for parts in Phoenix, we would wander towards Tucson and see what was around it. A little out of the way and about halfway there is Kearny.
The drive into Kearny is a nice mountain drive that gives way to mountains with tiers cut into them. The scope of it is truly impressive. We later found out that this is the local copper mine and the reason that Kearny exists. Along the road to town there were groups of picketers at various points. These are the miners who are currently on strike. I found out from a local, who accused me of being a scab since I was from out of town, that the mine was trying to take away the miners’ health insurance.
Just right outside of Kearny is a lake. It is more of a pond, but in the desert, I can forgive them for calling it a lake. There is a free campsite there and, to our delight, it had amenities: free running water, trash pickup, flush toilets, and one site even had electricity.
After staying for a couple days in Kearny, we decided to head to Tucson. We didn’t feel like exploring the city, so we just passed through on our way to camping out in the desert south of town. The campsite was just a pullout off of a rarely used highway. There was at least some fallen wood I could use to make a fire and enough flat ground that I could repair the bed.
Yeah, my luck hadn’t gotten any better on the trip. The last of the two latches on the bed gave way. The bed is a futon-like design. The base you sit on when it is a couch slides forward and the back folds down to make a bed. There are two latches that hold the sliding base in place when it is configured as a couch. The first one broke over a year ago and I had been putting off repairing it. Now I didn’t have any choice but to fix the problem.
I had foreseen this happening and carried two replacement latches with me. I pulled the bed cushions outside and started working. I had to completely remove the sliding platform. This is held in place on the heavy duty sliders with 30 counter sunk screws. I carry a small set of 12v power tools and the drill with the correct bit made this job much faster. After a couple hours I had it repaired and back together.
I grabbed a beer and sat down to enjoy the fire from the wood I had found and to relax.
It was part bad luck and part stupidity when my camp chair blew into the fire. I was staring off toward the horizon and smelled the chair burning before seeing it. I sat on the edge of my burnt chair and finished my beer. I decided then that we should probably just head back to Kearny. There wasn’t anything out here except rocks and cacti. At least back there, we had a close by town and amenities.
The next morning I got on the internet and ordered a couple new camp chairs. The chairs were almost in need of replacing even before I burned one up. I had them sent to General Delivery in Kearny, AZ.
I got in the driver’s seat and began the drive back to the campsite that we shouldn’t have left in the first place.
It is 50 degrees here in Kearny, AZ. It is also 50 at my home back in Iowa. I’m beginning to wonder why I came here. Hiding from winter isn’t working for me thus far.
This is how I spent my summer. NetNewsWire for iOS testing
It was cold. Colder than it was supposed to be, and Nicole was shaking me awake. I looked at my phone and it was 4 in the morning. The auxiliary heater had stopped working.
I had been concerned about my battery bank even before we had left Iowa. The capacity of the batteries had been declining last year when we came home and I didn’t check the capacity again before we left. I realized I was rolling the dice a little bit, but figured we could make it another 4 months or so on the batteries we had.
My first thought when the heater stopped working was that the batteries had depleted too much and the safety features of the heater had kicked in. If the voltage is too low, it doesn’t want to damage your batteries and will shut itself off and display an error code. I tried starting up the heater again and got the voltage error code.
I started the van to get the alternator recharging the batteries and tried again. The heater still wouldn’t start and instead gave a different error code. This one didn’t make any sense. It was the lack of fuel error code.
I began the heater startup sequence again and went outside to check the heater. It is mounted under the van on the driver-side frame rail. I noticed a lot of black material on the ground under the heater exhaust. It was carbon from the heater. My heart sank. Running the heater at too high of an elevation can cause the fuel-air mixture to run too rich and clog up the heater with carbon. I couldn’t understand what went wrong. We were only at 4000 ft, and 5000 ft was where problems were supposed to start.
The next morning, after a couple of phone calls, I found a place in Phoenix that would work on the heater. This is no small accomplishment. The heater is a Webasto gasoline heater that isn’t even sold in the United States. It is used in Europe, but is uncommon here. Even the diesel version, used in semis and some campers, isn’t common.
We immediately left Cottonwood and headed to the repair shop in Phoenix. Once there, we grabbed the cats and headed to a private waiting room. A couple hours later (at their $130/hr shop fee) they came back and let me know that a sensor had failed and that the heater would have to be disassembled to clean the carbon out of it. The repair estimate was actually more than I had paid for the heater itself.
If I was closer to home, I would have simply ordered a new unit and installed it myself. As it was, I was just happy to find someone who would work on it. I had them order the parts needed and Nic and I headed to a close campsite to wait it out.
I just migrated my personal website to micro.blog and shut down my old AWS/Wordpress site. Now I don’t have to worry about the security concerns that come with hosting your own site. A big bonus is that I now get all micro.blog’s social features.
After leaving Centerville, we mostly drove straight through to Arizona. We weren’t in a hurry though, so it took us 3 days to get to Flagstaff, AZ. Flagstaff is a beautiful city and we have camped in the mountains around it before. This wasn’t going to be our stop this time. There was already snow on the ground and the high altitude would have damaged our auxiliary heater in the van.
Instead we just passed through Flagstaff and headed toward Sedona. The drive down the mountains from Flagstaff to Sedona is very, very scenic. I didn’t get any photos because it was narrow roads and switchbacks most of the way. We just passed through Sedona as well because we were headed to a campsite about 8 miles south of there, half way to Cottonwood.
There is a road there that has lots of pullouts that people can camp at for free for 14 days. It was a very popular spot. You almost always shared a pullout with 3 or 4 other campers. And it was muddy. The mud was everywhere since this is the rainy season in the Verde Valley. We settled in and hung out there for a couple of days, only going into Cottonwood to sign up again for Planet Fitness, buy food, and use the internet at the closest Starbucks. It was surprisingly rainy for the desert, and eventually we learned to live with the mud.
I had bought some wood when we passed through Sedona and I was determined to have a campfire. I had no sooner gotten the fire going and a beer opened when it started sprinkling. Then it came down a little harder. Then a little harder. It was a full rain shower, but my fire was still burning, so I stayed outside, partly out of stubbornness, partly out of enjoyment. Nicole wouldn’t have anything to do with it. Those of you who know her know how the cold affects her.
Eventually another rubbertramp pulled up in an SUV. She asked to join me by the fire and I obliged. We sat around and drank a couple beers while stoking the fire and shivering in the rain. She was a single mother whose children had all grown up and moved out. So she sold her house, bought an SUV, kitted it out, and is driving around the US alone. The boyfriend who had planned on traveling with her bailed and left her to go it alone. She was on her way to North Carolina for Christmas and I imagine she’s doing fine. She seemed to be dealing with the regular road hardships without problem. Eventually it got too cold for both of us and we retired to our respective vehicles. She took off early the next morning, on her way to her next adventure.
One morning, I decided that Nic and I would head into Sedona and have lunch. The road that we were staying on looped around through the mountains and looked like it would be a scenic route.
As I was having my morning coffee, I noticed a couple of guys standing around a sedan, staring at it. Eventually, one of them approached the van, so I got out and asked him if he needed help. They had a very low tire and no spare. I have an onboard air compressor, so I helped them out by airing up their tire. While talking with them, they mentioned that there were some cave paintings up the road along the route that we were going to take into Sedona.
A short while later, Nicole and I were in a tour group at the Palatki Heritage Site. It was literally only 10 minutes down the road from where we had been sleeping.
The tour lasted a couple hours and was an easy hike. We really loved learning about these native and prehistoric peoples. The petroglyphs were very interesting and I found it somewhat amazing that they could last so long exposed to the elements as they were.
After the tour we drove the rest of the way into Sedona along the washboard, potholed road that we had been living on for days. We had some amazing pizza for lunch and headed back to the pullouts for the day.
Wow. It’s been over 10 months since I’ve done a van life post. I’ll have to get people caught up and post some stories that I have been meaning to get to. My last post was about our trip to Padre Island. We lived in Padre for about 3 weeks. We then went to Big Bend National Park in Texas and then on home to Iowa.
Nic and I spent Spring, Summer, and Fall at home in Centerville, IA. Nic took up sewing and I worked on an Open Source project called NetNewsWire. It was very relaxing being home and around family. Nic and I found a lot of fulfillment in our hobbies and were very content this year.
But eventually it starts to get cold and wet in Iowa and we did begin to ache for a change of scenery. We did the usual prep to the van. In addition to the usual maintenance, I put in a new floor.
The old floor was made from the rubber matting you often find in gyms around the weights. It was a kind of fuzzy version of the 1/4-inch rubber matting and was difficult to clean. Nic is a very clean person and hated cleaning it. And she let me know about it. Every time she cleaned. She cleaned a lot. So, I put in a laminate floor for her. It is working out pretty well. The complaining (about the floor) has stopped.
With the van ready to go, we spent Thanksgiving with friends and family and then took off for Arizona on December 2. Our first stop would be Cottonwood, AZ. As usual, we wanted some free camping close to a Planet Fitness, and Cottonwood delivered. More about our stay in Cottonwood in the next post.
Branching Strategies are controversial. Why is that? Why can’t we just pick a strategy like Git Flow and call it the one true way to do branching? The answer is that software development is too complicated for a one-size-fits-all approach; many factors can impact how you do branching.
What I am going to propose in this post is a minimalist branching strategy designed to fit the NetNewsWire project.
NetNewsWire is a small, open source project. It has a small core team that is trusted with full repository access. It has additional developers that contribute via repository forks and pull requests. Everyone is remote. There are two main products produced, an iOS app and a macOS app. The two products share code that should be kept in the same repository. There isn’t a comprehensive automated test suite. It has a stated project goal of releasing with zero known bugs. There is a desire to ensure that the release process doesn't impede development. Source control is done in Git and dependency management is done using Git submodules.
The branching strategy I am going to recommend is an implementation of Three-Flow, a Trunk Based Development strategy. Don’t read about Three-Flow right now; I’m going to give you an executive summary of it and apply it to NetNewsWire. Besides, the Three-Flow post has a lot of scary git commands, and it specifically says it won’t work for a project like NetNewsWire. By itself it won’t, but it is a good foundation to start from.
(A lot of NetNewsWire development is done using Git forks and pull requests. This branching strategy accommodates that workflow, but I won’t be addressing it in this post. This post will focus on how development is managed for the developers with full repository access.)
Three-Flow uses 3 branches to facilitate development, stabilize a release, and manage production hotfixes. Development happens on Master and moves to a branch called Candidate when it is ready to be stabilized. Development continues on Master and bug fixes to the release candidate happen on Candidate. When the product is released, it is pushed to the Release branch. Hotfixes can happen on the Release branch. All bugs found and fixed are back merged to Candidate and then Master respectively.
All arrows going up are promotions (pushes) to the next environment. All arrows going down are back ports of bugfixes.
That is Three-Flow applied to NetNewsWire. It would be that simple, except that we have two products to deliver from the same repository: the iOS and macOS variants of NetNewsWire. To stabilize and manage both variants, each will need its own Candidate and Release branches.
Today (6/2/2019) we would need 2 branches, Master and macOS Candidate, in the main repository, which will eventually grow to 5 branches. There will also be a number of repository forks that NetNewsWire developers will create to do bug fixes and implement new features (not shown here).
Each release should be tagged using Semantic Versioning. Candidates will continue to be tagged using the current convention which denotes the difference between developer, alpha and beta releases. Additionally, we will need to use a convention to avoid tag name collisions between iOS and macOS products. macOS will use even minor release numbers and iOS will use odd minor release numbers. (See the above diagram for examples.)
NetNewsWire uses Git submodules to manage project dependencies. All the submodules are under the same project umbrella as NetNewsWire and there are no third-party dependencies to manage. These submodules are mostly stable at this point. For simplicity’s sake, all development on the submodules will continue on their repository Master branch. These submodules won’t be managed as separate projects with separate releases/tags at this time.
There are 3 types of branches: Master, Candidate, and Release. All feature development happens on Master. Stabilization happens on Candidate. Hotfixes happen on Release. Each product gets its own Candidate and Release branches. All candidates and releases get tagged.
I feel this system is as simple as it can be, but not any simpler. The complexity built into it buys us real benefits.
Feedback is welcome. We will be discussing this post in the NetNewsWire Slack on the #work channel.
I’ve put the finishing touches on Feed Compass 1.0 and uploaded it to the App Store. Feed Compass makes it easy to find and preview blogs. If you like the blog you previewed, it makes it simple to subscribe to it in your favorite RSS Reader. I find it a really useful app, especially for Apple developers, since it includes the iOS Dev Directory OPML files.
It is a free app, so there is no reason to not check it out.
Because one of the shortcomings of Feed Compass is the lack of OPML files available on the net, I built a companion application for it, Feed Curator. Feed Curator makes it easy to create OPML files and publish them for free on the internet. I wrote about the design behind Feed Curator in Curated Blog Lists.
In the end, the applications had references to each other so I wanted to release them together at the same time. Feed Curator 1.0 is also available in the App Store and is also free.
Give these applications a try. If you read blogs, you should check out Feed Compass. If you have a collection of blogs you think is special and want to promote, you will want Feed Curator too. Feed Compass and Feed Curator together make it easy to share blogs with the world.
In my previous post, I talked about curated listings of blogs. This time I want to talk a little bit about computer generated lists.
A following list is a staple in social media. Who you are following and who is following you are very useful pieces of information. You can tell a lot about a person from who their friends are or who they are interested in.
If we knew what blogs a person was reading on a regular basis, we could do the same kind of following recommendations that social media platforms do. For example, if Bob is following Suzy and Suzy is following Karen, then we could recommend Karen’s blog to Bob.
So how do we get that following information for blogs? Fortunately a good number of people still rely on RSS and use RSS readers to get their daily blog fix. The RSS readers themselves know who people are following. Also fortunately, the various RSS readers all allow you to export this information in a standard format.
(OPML has many uses besides managing feed subscriptions, but for this article I am going to focus on subscriptions only. When I use OPML in this post, just consider it a list of RSS feeds in an open format.)
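For concreteness, a minimal subscription OPML file looks roughly like this (the feed title and URLs are made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head>
    <title>My Subscriptions</title>
  </head>
  <body>
    <!-- one outline element per subscribed feed -->
    <outline text="Example Blog" type="rss"
             xmlUrl="https://example.com/feed.xml"
             htmlUrl="https://example.com/"/>
  </body>
</opml>
```

Every RSS reader I know of can export something in this shape, which is what makes aggregating subscriptions feasible at all.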
If we could get people to export their OPML file and upload it to a common database, we could do a simple algorithmic recommendation list.
It so happens that this database of OPML files already exists in feedBase. This is a project that allows you to upload and manage your OPML subscriptions. It also publishes a Hotlist of the top 100 most popular blogs in feedBase and exports it as OPML. It will be of no surprise to anyone familiar with RSS and OPML that this is one of Dave Winer’s projects.
Unfortunately, feedBase doesn’t currently do a suggestion list. feedBase, like all systems of this sort, could also use more data in its database.
Feed Compass needs more and higher quality OPML lists to make it more useful. Being able to consume a specialized recommendation list from feedBase is very desirable. Also, the more data in feedBase the better the lists from feedBase in Feed Compass will be.
I’ve spoken to Brent Simmons, the creator of NetNewsWire, about making it possible to directly request a user’s subscription OPML. This would, of course, be done with the user’s permission. Feed Compass would pull the OPML from NetNewsWire and upload it to feedBase, removing the friction of exporting it and going to the feedBase website to import it.
My hope would be that by making it easier to upload your subscriptions, and by offering a customized recommendation list for your effort, more people would be inclined to share their subscriptions with feedBase.
How I envision this working:
+----------------+
|    feedBase    |
+----------------+
   |       |        ^
Hotlist Suggested   |
   |       |       Subs
   V       V        |
+----------------+              +----------------+
|  Feed Compass  | <-- Subs --- | NetNewsWire 5  |
+----------------+              +----------------+
I don’t think that this idea is going to revolutionize how the web is used or that it's going to take down the social media silos. I do think it could make blog discovery better which could drive traffic to more blogs. It may be a small contribution to making blogging more mainstream again, but I think it is a worthwhile one.
I recently wrote an app for the Mac called Feed Compass. It displays lists of blog feeds, lets you preview them, and then subscribe to them in your RSS reader. It is designed to solve the problem of not having enough content in your RSS reader. The problem I’ve run into is that Feed Compass has the same problem itself: not enough lists of blogs to show the user.
One solution to this problem is for users to take their personal listings of blogs from their RSS readers and upload them to a service for aggregation. The only service I know of that does that currently is feedBase. Feed Compass already utilizes the Hotlist from feedBase. The Hotlist is the top 100 most subscribed listing. There are currently plans to do more feedBase integration to get more content from it into Feed Compass. What I want to talk about today are curated or custom lists.
Feed Compass already has a small handful of curated lists that provide the majority of the content. Some of the best ones come from the iOS Dev Directory. They are awesome if you are an Apple developer. I think that Feed Compass needs more curated lists like this, but covering a wider range of interests. The problem is that such lists simply don’t exist.
I think one of the things that makes the iOS Dev Directory successful is that it has a process where people can submit blogs to be included in the listings. This process utilizes GitHub with forks and pull requests for its workflow. In the end it produces an OPML file that can be used by RSS readers and Feed Compass.
This seems to me like a pretty good way to get a curated listing of technical blogs, but it is too complicated for the lay person to use. I’d like to propose a new application to make it easier for the average person to curate and publish a listing of blogs.
This application would initially be a Mac app, but could be ported to other platforms. It would be able to create and edit OPML files, which contain an entry for each blog. Together, the entries comprise a blog listing.
The application would let you drag URLs from another application, such as a web browser, onto it. It would then produce the correct OPML entry for that page, including finding the RSS feed in the page. There should also be a Safari plugin so that you can add the blog to the OPML listing without having to drag the URL.
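Finding the RSS feed in a page is the standard feed-autodiscovery trick: look for `<link rel="alternate">` tags in the page’s `<head>` that point at an RSS or Atom feed. A minimal sketch in Python (the real app would be in Swift, and the example page here is made up):

```python
# Sketch of feed autodiscovery: scan a page's HTML for <link rel="alternate">
# tags whose type marks them as RSS or Atom feeds.
from html.parser import HTMLParser

FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

class FeedLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link"
                and "alternate" in (a.get("rel") or "").split()
                and a.get("type") in FEED_TYPES
                and a.get("href")):
            self.feeds.append(a["href"])

def find_feeds(html):
    """Return the feed URLs advertised by an HTML page."""
    finder = FeedLinkFinder()
    finder.feed(html)
    return finder.feeds

page = """<html><head>
<link rel="alternate" type="application/rss+xml" href="https://example.com/feed.xml">
</head><body>Example blog</body></html>"""
print(find_feeds(page))  # -> ['https://example.com/feed.xml']
```

A production version would also resolve relative `href` values against the page URL, since many blogs advertise their feeds that way.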
For publishing, it would upload the OPML file to a Github Gist. It would also be able to edit OPML files stored as Gists. This gives us a way to distribute the OPML for free. As a bonus it also provides a full revision history.
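Publishing to a Gist is a single authenticated POST to GitHub’s REST API. A rough sketch of building that request (the filename, description, and token are placeholders; actually sending it requires a personal access token):

```python
# Sketch of publishing an OPML file as a GitHub Gist via the REST API.
# This only builds the request object; nothing is sent over the network.
import json
import urllib.request

def gist_request(filename, opml_text, token="YOUR_TOKEN"):
    payload = {
        "description": "My curated blog list",
        "public": True,
        "files": {filename: {"content": opml_text}},
    }
    return urllib.request.Request(
        "https://api.github.com/gists",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )

req = gist_request("blogs.opml", '<opml version="2.0">...</opml>')
print(req.full_url)
```

Because each edit to a Gist creates a new revision, the app would get version history of the listing without doing any extra work.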
It should also have a button to submit the listing to Feed Compass for inclusion. Submitting would produce a GitHub Issue that would be reviewed to see if the listing should be included in Feed Compass.
The application would be Open Source and MIT Licensed the same as Feed Compass.
This would give us a way to distribute and maintain OPML files without having to set up, pay for, or maintain a centralized server. That is, as long as GitHub doesn’t drastically change its business model and start charging for Gists or open source projects.
If done correctly, it should be easy enough for your average person to add to and maintain the OPML file while they are browsing the internet. I envision people visiting their daily websites and adding to the OPML as they go.
It provides a workflow for reviewing lists via GitHub Issues to help prevent spam from getting into the system.
Feel free to leave a comment below. You can also join the discussion on Github: We Need More Curated Lists.
One of the things I wanted to do on this trip was figure out good places for Nic and me to stay for weeks on end. If you’ve read some of my earlier posts, you know that I am primarily looking for free camping next to a city with a Planet Fitness in it. Padre Island and Corpus Christi fit that description perfectly, so we had that as a destination, but we wanted to find more spots along the way.
Our first stop was to be near the Suwannee River in the Florida panhandle. It was about a 4 hour drive from Bradenton and the last stretch of road to the campgrounds was the roughest we’ve ever been on. I really appreciated the offroad suspension and lift.
One spot in the road looked impassable. Water covered the entire road and spanned about 20 feet of it. I dipped the front tires in to see how deep it was and we went for it. Water came up to the bumpers and the tires spun a little, but we didn’t have any trouble making it through. We figured we should at least have some privacy if you had to traverse a road like this to get to the campsite. What we ended up with was a mixed bag.
The view of the river was pretty nice. You could tell that the campground was beautiful at one time. The problem was that it had been completely trashed. The grounds were torn up by 4x4’s and the picnic tables burned.
Due to the vandalism, the place had a sundown curfew. Since it was close to sundown by the time we got there and we were tired, we decided to stay anyway. Besides, the campground was at the end of that crazy road and the government was still shutdown. Who’s going to bother us?
A state ranger in a 4x4 pulled up right after Nic and I got settled in and started reading our books. She wasn’t amused to see us after sunset and pointed to the big sign with red lettering that we had chosen to ignore. I decided not to play dumb and just told her we had been driving a long way and would be gone first thing in the morning. She wasn’t having any of it and ran our driver’s licenses and plates. In the end she was cool and let us off with just a warning citation. According to her, the campgrounds had actually been cleaned up before we got there. The vandals had also chained up the port-a-potties and dragged them around in addition to all the damage we saw. I’d like to eventually visit again after they get it straightened out.
With no place to sleep, Nic and I decided to head to Apalachicola National Forest over by Tallahassee, FL. It was only about another hour drive or so. The campsites weren’t very nice. They were basically just open spots in the woods with dumpsters and port-a-potties brought in to support the deer hunters. The only thing interesting about Apalachicola was that the hunters brought their 4x4’s in on flatbed trailers because they broke them hunting deer so much.
I wasn’t really impressed with Apalachicola and it wasn’t close enough to Tallahassee to make the trip in town to the Planet Fitness worthwhile, so we left there and headed toward our next prospective campsite.
The next national park in Alabama we went to was closed down due to the government shutdown. The one we drove to after that didn’t exist anymore. I’d finally had enough. Out of the last 4 free campsites we had been to, only one was open and that one sucked. We decided to drive straight through to Corpus Christi.
Our next stop after the Everglades was Bradenton, FL. I wanted to stop and visit with some friends that had a winter home there. Friends is probably not a strong enough word. Second set of parents would be closer to my relationship with Chuck and Barb Willkomm.
Throughout high school and college, Chris (their son) and I were pretty much inseparable. We lived together, partied together, and hung out at each other’s parents’ houses. More often than not it was at Chris’ parents’ house that we ended up. If it was late at night and Chris and I (often with other friends in tow) were hungry, we would raid his parents’ kitchen. Barb, rather than yell at us for waking her up and stealing her food, would visit with us and even cook for us. I even went with them on a family vacation to Wisconsin once. I have many fond memories of the Willkomm family.
I hadn’t seen Chuck and Barb in over a decade since Chris and I grew up and moved away from each other. It was a more emotional reunion than I had prepared myself for. Barb hugged me and burst into tears. I got a big lump in my throat and hugged her back.
The Willkomms have their main home in Branson, MO. There they see shows daily, sometimes even multiple times a day. Barb was bringing a little of that with her to the Bradenton park community they lived in with a Neil Diamond tribute act. The performance was the same day that Nic and I got there and Barb was busy preparing for it, but not too busy to make us a homemade spaghetti dinner. It was amazing.
The next day, Barb and Chuck took us to see the manatees at a local power plant spill basin. The manatees like the warm water coming out of the power plant and it makes for good viewing.
They even had a petting zoo for stingrays.
After we saw the manatees, we drove out to Anna Maria Island and had a late lunch at The Rod and Reel Pier, a restaurant at the end of a fishing pier with an amazing view of Tampa Bay. The food was awesome.
By the time we got back from Anna Maria Island, we were all wiped out and crashed for the night. The next morning, Barb was up bright and early and cooking us breakfast. After breakfast, coffee, and some visiting, Nic and I got ready to leave.
We had an amazing time hanging out with the Willkomms and will be trying to visit them in Branson next time we head that way. After we left Nicole told me that it felt like she had known Barb and Chuck forever and that we should stay longer next time. Many thanks to them for hosting us and showing us around Bradenton.
We left Key West after only a couple days. Looking back, there was more we would have liked to do there and we probably should have stayed for another couple days. But we prefer to camp in the wilderness, so we got back on the road and headed to the Everglades.
When we entered Everglades National Park there was no one at the entrance or the campsite check-in. It looked to us like it was unmanned because of the government shutdown. We have an Annual National Park pass, so it didn’t really matter to us. We wouldn’t have had to pay to get in anyway, though we would have to pay for a camping spot. We drove around the park and it looked to be in good condition considering the shutdown.
As we made camp, someone asked me how to identify the camping spaces (they had the numbers painted on the slabs) because he had to go back to the check-in building and tell them which campsite he was in. After helping this guy, I walked back to the check-in and paid for the campsite we had settled into. They must have just been away from the desk when we drove through before.
Nicole and I decided to take a hike while we were there. We only got about a 1/4 mile in before being overwhelmed by mosquitoes. These suckers were huge. Like, steal your girl and fly away huge. Nic wasn’t having any of it so we turned around and went back to the campsite.
These guys were hanging out in the trees when I got up the next morning.
We only stayed in Everglades National Park for one day. It was hot and muggy. The bugs were crazy ferocious. I would have liked to rent a canoe, but Nic was not going to get into a canoe in alligator-infested waters.
So we hit the road with Bradenton as our next destination. Crossing Florida from Miami to the western side of Florida is a highway nicknamed “Alligator Alley”. It is well named as it has a canal running next to it that alligators have infested. I saw one roadkill alligator getting eaten by vultures on the way.
We didn’t make it all the way to Bradenton and stopped for the night at a State Park. We don’t like to pay for camping and that made two days in a row for me. Unfortunately, there just isn’t any free camping in the Everglades like you would typically see around a National Park. One thing that was cool about this campsite was that it had one of the walking dredgers that was used to make Alligator Alley.
This thing actually dragged itself along the ground and scooped out mud and stumps so that the Everglades could be drained enough to build the road.
There isn’t much to say about this campsite. We just had a peaceful night camping before it was time to move on the next day.
Parking in Key West was very challenging. We got there late and there were few parking lots without “No Overnight Parking” signs. We did finally find a place close to the beach that had an awesome view of the ocean and settled in for the night.
The next morning Nicole and I got up and she asked me if I’d heard the bird that sounded just like a rooster that morning. I told her it was a rooster, but she wouldn’t believe me because the crowing was coming from right beside the van and there was no way that someone in town was keeping chickens that close to us. I decided to get up and walk down the street a little and sure enough, roosters everywhere. They freely roam the streets of Key West scratching and crowing.
Nic and I spent the day bumming around in the van and at the beach. That evening we decided to check out the shops and nightlife. We saw an original painting that Nic fell in love with. Thank goodness it was already sold, so there was zero temptation to buy it. It seems that people in Key West really like their chickens.
The next day we walked to Hemingway’s House/Museum. One fact that was new to me was that Hemingway married into his money long before he was a successful writer. In fact, the Hemingway house was where he wrote all his novels.
His wife put the pool in, to spite him, while he was away as a correspondent during WWII. It really pissed him off, both because she had donated the boxing ring he kept there to the local brothel and because she spent a small fortune building it. Everything under Key West is coral, so digging a hole there is a real enterprise and very expensive. In the end there wasn’t much he could say about it (although he did) because it was her money.
This room above the carriage house is where Hemingway did all his writing.
Everything you see there actually belonged to Hemingway. His family sold the house, furnishings intact, to the family who currently owns it. They had the foresight to preserve everything and eventually turn the home into a museum.
We spent the whole morning doing a guided tour and then touring the grounds. After that, it was time to jump in the van and head towards the Everglades.
Universal Studios was directly on our way to Key West. Since Nic is a huge Harry Potter fan, we thought that we would stop there so that she could see the new Harry Potter World that they put up there.
They divided the Harry Potter World across two different theme parks, Universal Florida and Islands of Adventure, so that you would have to pay for both worlds to get the full Harry Potter experience. It didn’t much matter to us, since the Islands of Adventure part of Harry Potter World is a roller coaster. Nic hates roller coasters, so we opted for just the Universal Studios theme park. That side has Diagon Alley.
Diagon Alley is a (I think) full size reproduction from the movies. It has shops where you can buy wands and use the wands to interact with various bits of scenery throughout the park. It also has the usual gift and food shops. You can even get Butter Beer! To help adults keep their sanity, they also had Wizards Brew, a fairly strong stout and other adult beverages. Butter Beer tastes like a buttery cream soda. It is really good and if they don’t bottle it to sell it, they should. Nic really loved the stuff.
Gringotts Bank has a cinematic ride inside of it that puts you in the middle of a 3D movie scene with the original Harry Potter actors as holograms. Escape from Gringotts was impressive and a lot of fun.
For lunch we ate at a Harry Potter favorite, The Leaky Cauldron. It served traditional British fare. I had fish and chips and Nic had a shepherd’s pie. The food was good, but as you might have already guessed, it wasn’t cheap.
There was a lot more to Universal than just Harry Potter World. I solo rode a couple of roller coasters. The Fast and Furious ride made me wish I had my nephews with me. It was full of fast cars, girls, and explosions. All the best stuff.
At about 2 in the afternoon, we had seen all of the park that there was to see and so we hopped back in the van and got back on the road toward the Keys.
We’re still in the Osceola National Forest at the West Tower Campground. This is the same campsite that we have been at for over a month now. Like I mentioned before, this place has basically all you could want in a free campsite. Our time here is coming to a close however. General gun season runs out on Jan. 6th. During general gun you can stay as long as you want (there usually is a 14 day limit), so it is almost time for us to move on. As for me, I’m good and ready to get on the road again.
Let’s do some catching up on what Nic and I have been doing all this time here. Our routine has been very consistent for the last month. We work out Monday, Wednesday, and Friday. In between, I code and read. Nic reads and plays Sims on her computer. Usually once a day or so I have an overly long conversation about politics and the economy with my neighbor Al.
Al is 75 years old and lives in a tent. He’s been homeless for over 6 years now. Before that he lived in a travel trailer that got destroyed in a tropical storm. He’s in pretty decent shape for his age and has a little pickup to get around in. He mostly listens to the radio or reads the paper during the day. He doesn’t have much family except for a couple of older sisters, so he likes to chat a lot.
Al and I couldn’t be more different in our beliefs. I don’t believe we are being invaded by Mexico, that Jews are trying to destroy our economy, or that there is an imminent race war about to happen. He probably believes me to be a bit naive because I don’t know about the Illuminati or the shadow government. Regardless, we do have long conversations and try to understand each other’s differing perspectives.
Our Christmas dinner this year consisted of stuffing, mashed potatoes, ham steaks (cooked on an open fire), and individual pumpkin pies. We had Al over since he didn’t have any family to spend it with and our family is far, far away.
All the food turned out really well. Nic is getting pretty amazing with nothing to work with but a hot plate and some collapsible cookware.
The campsite where we are staying at is an old fire watch station complete with tower.
I got bored and decided that I should climb it. Don’t worry. Even though the watch tower was long ago abandoned, it is in great shape. The steps are newer treated lumber and the whole thing barely swayed as I got towards the top.
Next Monday Nic and I are heading to the gym first thing in the morning and then driving towards the Florida Keys. We don’t have any plans other than visiting Hemingway’s house/museum. After that we will be leaving Florida and visiting friends in Bradenton on the way out. We’ll probably head to Texas after that. Who knows? ¯\_(ツ)_/¯
I was working on a program at the picnic table today when the rain kicked up again. It’s been raining here for about a day and a half and is expected to continue for a few more. We had a break in the rain and I figured that I would get out of the van for a bit, but got caught outside.
We recently bought a tarp and some poles for our camping gear. The tarp is mostly to let people know that there is someone already camped in this site when we are in town. Today it kept my computer and me mostly dry.
You can’t really tell from the photo, but the rain was coming down in buckets. I was really surprised to see the deer hunters out in this kind of weather. I suppose they were probably surprised to see some guy working at a computer in it. I felt sorry for the dogs the most though. The hunting dogs that is.
This little girl had about had enough of it and was more interested in sniffing the grease drippings in my fire pit.
One of the things that greatly surprised me was how deer hunting in the deep south was so much different than how we do it in the midwest. Firstly, they almost completely hunt from their pickup trucks. It isn’t illegal to shoot across a road down here or to have a loaded gun in the vehicle. Heck, about half the pickups I see down here have a seat mounted in the bed of the truck to sit on and shoot from. They look like some kinda 4x4 bass boat or something. Oh, and everyone uses a high-power rifle.
The other thing they do is use dogs to run the deer. Some of the more responsible hunters put radio collars on the dogs and track them. You’ll see them with what look like old TV antennas, waving them around the outside of their pickup trucks trying to track the dogs. This is the same stuff we used to watch Jim put on a lion in Mutual of Omaha’s Wild Kingdom.
Some guys don’t use the radio collars. There are cages at about every other corner on the gravel roads labeled “Lost Dogs.” If you find a dog, you’re supposed to put it in the box and hope someone comes along and claims it.
The other thing I found surprising was how small the deer are here. I hear that if you get one over 100lbs, you got a good one. Maybe hunting small deer with dogs, pickups, and high-power rifles is more common than I knew about. It was all new to me and I found it all a little strange.
We visited Charleston on our way south into Florida. It was a beautiful city. We parked the van in an overnight public parking spot and walked around the downtown area. It was a pricey parking spot at $30, but still cheaper than most camping sites, hotel rooms, and police station visits.
Most of you know that Nic and I like to go out and have a good time. With trying to get into shape we came up with a new regimen that cut beer out of our diet. Mostly that is. We allow ourselves a night of drinking every other Saturday. Our visit to Charleston landed on that. Since we had permission from ourselves to party and a safe place to sleep, we walked around Charleston and bar hopped.
We checked out the bay and saw the sights. We didn’t take a carriage ride, which seemed to be the big tourist thing to do in Charleston. The carriages were so prevalent that the parking lot we were parked at reeked of horse piss until they hosed it down at the end of the day.
No trip for Nic and I would be complete it seems without a stupid sign. You couldn’t drown in this fountain if you were face down and someone was pushing your head down.
We rounded out the evening at a fancy restaurant. We weren’t looking for an upscale restaurant, but pretty much everything in downtown Charleston was. It had been a while for us so we decided to treat ourselves. The place we went to was so fancy that they even had their own cookbook.
Yeah, the restaurant was actually named S.N.O.B. or Slightly North Of Broad. The food was awesome and well worth busting the budget a little bit.
It seems Nic and I are finally slowing down a bit in our older age. We were in bed by 10 and on the road the next day by 7. Next stop, Osceola National Forest.
We’ve been here for about a week now and really enjoying it. The campsite is posh for a free one. We have running water, a flush toilet, garbage pickup, and even an outdoor shower. The shower is interesting, but we shower at the gym. The campsite location is another win. It is only about 25 minutes to the gym and Lake City, FL. Lake City is big enough that you can get anything you want there.
It also is really quiet here most of the time. Not on Thanksgiving weekend though. The campsite filled up with people who brought in SUV’s to ride on the many surrounding off-roading trails. They (and a bunch of other campers) were up late playing loud music, drinking beer, and shooting off both guns and fireworks. They kept Nic up late, but I had no problem sleeping through everything. The holiday weekend crowd trashed the place, but the forest service was out promptly and cleaned up the area. It is back to peace and quiet now.
It looks like we are going to be here for a while. The location, weather, and amenities are better than we would find anywhere else. The usual limit for this campsite is 14 days in any given 30 day period. This time of year it is unlimited days until Jan 6th. The unlimited time is for hunters, but there aren’t really any of those around. Until further notice, Osceola National Forest is home-sweet-home.
Nic and I are still in the Francis Marion National Forest. We’ve been camping at a different campsite than the one I told you about in my last blog post. This campsite is closer to town (and Planet Fitness). It has a through-hiking trail that runs along it. I’ve been told that it goes for 450 miles. I’ve hiked the trail for a few miles while listening to podcasts and it is very beautiful. Enjoy this picture of a Smurf house I found along the trail (the house was about 5” tall).
I think the fact that this place is free to stay at combined with the trail and how close it is to town has made this quite the strange campsite. The strangeness was almost immediately apparent. The first thing we saw when we got here was 2 boxes of generic hamburger helper and a package of spaghetti noodles propped up against a log. There was also this bundle of flowers. There were also some tents, but no one around.
The tents themselves weren’t strange at first anyway. People often put up tents and go do day hikes or visit local attractions. It wouldn’t be until later that I realized that Nic and I were the only non-permanent residents. Yeah. We’ve been at this campsite off and on for almost 2 weeks and the tents don’t move. At about 8 or 9 at night cars start to show up to spend the night. Not every night, just most nights.
Stranger still are the tents I saw when gathering firewood even deeper out in the woods. Take the one in the next picture for example. It has a porch, laundry lines, and various junk scattered around it. Hell, it even has a set of golf clubs as you can see in the picture. The notice on the front is a warning from a forest ranger about it being abandoned, but its date is Sept. 18th.
The inside was decorated with pallets and wicker furniture. My guess is that the poor homeless person that lived here got arrested and didn’t have a chance to come back for their golf clubs.
The strangest thing to me is this table that someone keeps moving around the woods. We’ll leave and go to another campsite for a day or two and this thing will be in a different place.
Nicole says she likes this campsite, but she does spend most of her time in the van reading. The one time she went out walking by herself, she didn’t get out of eyeshot of the van before returning and swearing she wasn’t going out on her own again.
This is our last night here. I’m not going to miss this place. Tomorrow (Saturday) we go into Charleston, SC for a date. The plan is to urban camp in Charleston after we have a night out on the town and then drive to Florida on Sunday.
We left Croatan National Forest a couple days ago. We enjoyed our time there, almost 2 weeks, but it was time to move on.
We set our sights on South Carolina and hit the road. Conway, SC looked good on paper. It had some free camping sites nearby and a Planet Fitness in town.
Planet Fitness has become something of a requirement for Nicole. She’s pretty serious about getting in shape and we work out at PF exactly 3 times a week. I’m enjoying the regular showers and the workouts add some entertainment to our days. We’ve been going for about 3 weeks now and are beginning to see some results.
But I was talking about Conway, SC. We got there and checked out the camping site. It was along another river, though not nearly as nice as the White Oak River we had been living next to. In fact, this place was a little trashy. Broken beer bottles, cans, and other trash were everywhere. Heck, one guy even forgot to pick up his boat when he was done with it.
We decided to stay the night anyway. It was hardly the worst place we’ve stayed at. About midnight a pickup with open exhaust roared into the camping area. They decided it would be funny to do a donut and splatter our van with sand and mud. The racket they made woke Nicole up in a panic and she wasn’t able to get much sleep the rest of the night. It was now the worst place we’ve stayed at.
Strangely the pickup got suddenly silent after doing the donut. Then about a half-hour later it started back up and left quietly. It wasn’t until the next day that I understood. I went for a short walk and found where they had lost control of the truck and slid off the road and into the surrounding swamp. It looked to me like they had buried their front tires into about 2 feet of mud and water. They got the truck out, but given how long it took them to get it into 4WD, I think that they spent some time knee deep in swamp mud getting the front hubs locked in.
This negative experience meant that we were only one night in Conway. The next day we headed to Francis Marion National Forest where there are almost a half-dozen free campsites. We’ve been here about 3 days now and are enjoying it.
It is a 25-40 minute drive to the nearby Planet Fitness, but it seems worth it so far. The weather is overcast, but it is warm and in the 70’s. The biggest complaint I have today is that some jackass over the hill is blaring country music and some other jackass is firing off a .22 so fast that there obviously is no aiming involved. Miles from anywhere and I have this shit to put up with.
In between workouts I’ve been getting some light reading done and quite a bit of coding. While I consider myself retired, I don’t want to let my skills degrade. Given the recent uncertainty in the stock market, it isn’t a given that I will never have to get a job again. Besides, I enjoy programming again and it passes the time.
We’ve been in the Croatan National Forest for a couple of days now. Nicole has been doing a bunch of reading and I have been teaching myself the macOS graphical APIs for an open source project I have been contributing to.
We are moving between a series of campsites along the White Oak River, trying to see which we like best. There are 5 of these total, but only 3 are in the national forest. The 3 in the national forest are the ones we are looking at because they are the ones we can camp at for free. Most of the sites have launches so that you can canoe down the White Oak and get picked up at one of the downstream sites. There are also a couple of sites on the other side of the forest we will be getting to next week.
Nic and I also decided that one of the things we should be doing with our free time is getting into shape. We’ve started hiking a little more. Yesterday we went and got a membership to Planet Fitness and did a workout. We plan on going 3 times a week. Pricing wasn’t bad. We got Nic their premium Black Card Membership that allows her into any of their 1,500 locations for about $25/month (including annual fees). It also allows her to bring a guest whenever she wants, so I can get in for free.
Neither of us has done much working out lately, so we’re both pretty sore today. On top of that, it is raining all day and tonight, so today is a hang around inside the van and lick our wounds kinda day.
Nic and I considered our time with Lisa and Flash to be a vacation. We spent more than we usually do on dining, drinks, and gas. It was time to get back to just living life as simply and as cheaply as possible.
Some of the best places to do that are National Forests. In this case we went to the George Washington National Forest and a place called Wolf Gap. This was directly west of Washington, DC and on the border between Virginia and West Virginia. And I mean on the border. It was literally a 5 minute walk for me to get from my campsite on the Virginia side to the West Virginia border.
Dispersed camping often means you get nothing except a spot to park in the woods. Wolf Gap had a bunch of nice-to-haves. Marked and maintained campsites, a pit toilet, picnic tables, fallen dead wood (for campfires), and fire rings were all available. They even had level tent pads for the tent campers (which was everyone except us).
It also had close by the things you want when living off the grid and on the road. A public trash facility was only 5 minutes away. Less than 5 minutes away was an artesian well, maintained by the local 4-H, with safe drinking water to fill our tanks. There was also a nearby recreation area (Trout Pond Rec Area) that had cheap showers, or if you wanted something nicer there were 2 truck stops just 20 minutes away. Finally, it was close to a city that had all the shopping you could need.
Also it had some kick ass views if you were willing to hike a couple miles to see them.
Wolf Gap was about as good as it gets except it was too cold for Nic. Days were 50-60 degrees with nights getting down to the 30s. I put a Webasto heater in the van this year, so it stayed at whatever temperature we wanted day or night. This made it possible for us to stay a week at Wolf Gap, but it didn’t give Nic a lot of outside time. She mostly stayed in the van and read. I sat outside enjoying an all-day fire, either listening to music or reading as well. The cold doesn’t bother me the same as it does Nic.
The wife wanted something warmer, so we hit I-95 and started heading south into North Carolina and the Croatan National Forest. We just got here today, 10/23/18, and set up camp. It is about 20 degrees warmer here.
My blogging station for the night.
We had a few big items that we wanted to accomplish while in New England. We wanted to see the fall colors, see lighthouses, eat lobster rolls, and go to Salem, MA. We got plenty of views of the foliage in Maine and Vermont. The lighthouses were kind of a bust. The really nice one we went to was covered in scaffolding and you couldn’t get close to it.
Salem surprised me by how much fun it was. This is where we got the lobster rolls (for Nic and Lisa) and did some touristy stuff we hadn’t done for much of the trip.
Parking was a nightmare in Salem. The public parking garages had 6’ 6” clearances. We left Flash and Lisa’s RV at a campsite and took the van, but it is still closer to 8’ tall than 7’, so we had to find alternate parking. We ended up in a private parking lot that cost $30 (which Flash covered for us). Other than the cost the parking lot worked out well because it was centrally located.
Right away we saw the Witch House. We plopped down our $8 apiece to tour the place.
Inside was a historical look at colonial life.
Salem itself was interesting this time of year. Lots of people and kids were dressed up in costumes and walking around long before Halloween. Sorry, I didn’t get any pictures of them. We also walked around town, grabbed some coffee, and eventually ate at a really nice restaurant to get the lobster rolls.
We even checked out an old cemetery.
At the end of the day, we made our way back to our campsite. The next day we dropped off Flash and Lisa’s RV at the rental place and took them to the airport.
The airport is in downtown Boston and we dropped our friends off right at 4:00 PM. Boston rush hour is a real hassle. Nic and I left Boston and headed to Providence, Rhode Island to spend the night. We rested our heads in the parking lot of a Cracker Barrel restaurant.
The drive between Boston and Providence was basically one long traffic nightmare. Nic and I decided that we’d had enough of cities and headed out to the wilderness. We set our sights on West Virginia and the next morning hit the road.
Everyone told us that all the tourists go to Bar Harbor. So we went. We just visited the town and missed things like Thunder Hole because we were a bit fatigued from driving that day.
After that we headed to Glen Ellis, Maine. We got some gloom and rain; otherwise this would have been one of our favorite camping sites.
About 20 minutes away was Glen Ellis Falls where we hiked in to view one of the largest waterfalls locally.
We spent a lot of time in Maine and were ready to see some other states. On our way to Vermont, through New Hampshire, we took the Kancamagus Highway, which was very scenic this time of year.
We ended up in Stowe, Vermont. One of the first things we did was check out a local maple sugar farmer who sold us some really good maple syrup. We then headed out to check out some local breweries. One we went to was The Alchemist. The beer was disappointing, but the architecture was impressive.
One common beer theme that we liked about New England was that you could get Shipyard Pumpkin Ale everywhere on tap. It tastes almost just like a pumpkin pie. If you want, they will coat the rim with a brown sugar mixture. We found it to be too sweet, but it must be popular because it was always offered to us whenever we ordered a Shipyard Pumpkin Ale.
More in Part 3...
We spent 10 days with our good friends Flash and Lisa. Nic and I had a great time with them and it was lots of fun sharing the van lifestyle with them. They had an RV, but it was a Class B and not much bigger than a converted van. We did stay primarily in campgrounds. I think they got the good parts of van life and missed out on the scrambling to fill your water tanks and taking daily sponge baths.
It was pretty cold along the Maine coastline.
Here we are out on a little hike on a small mountain. There was originally a fire watch station here. You can start to see some of the famous New England fall color in the landscape.
Later we found this awesome lakeside campsite. The weather was in the 70’s and view was beautiful.
Here is Mako giving attitude. She really enjoys being outside and since it was so warm those few days, she got to enjoy the outside. Flash and Lisa even took her on a leash guided tour of their RV.
Here is Flash babysitting for us.
After our lake campout in Maine we hit up a huge antique store on the way to our next destination. I fully expected Lisa to pack their RV with some finds, but she had lots of self-restraint.
More coming in part 2...
On our way to meet up with Lisa and Flash we stopped at Niagara Falls.
Yeah, some people are so stupid that we do need signs like these.
After we left Niagara Falls, we went looking for some free backwoods camping. We ended up sleeping on a pullout just south of Syracuse, NY.
I wanted to stop at the Volo Auto Museum after I saw it on Roadkill. If you haven’t watched Roadkill, I highly recommend it. It is car guy reality TV without any faked up drama.
One of the first things you see when entering the museum is their Duesenberg collection. Nicole and I both agree that these cars have to be the height of automotive manufacturing and design. We also learned a fun fact: the phrase “That’s a doozy” comes from people’s admiration for the car.
Another cool thing about the Volo Auto Museum is that it is basically the world’s largest hotrod car lot. You can buy about any tricked out car you can think of.
They also had a ton of movie and TV Show cars.
We only spent a couple hours there, but had a great time. I highly recommend it if you are ever in the area.
Nicole and I left Centerville on Monday, Oct 1, 2018 to travel around New England with our friends Flash and Lisa. Our plan is to pick up Flash and Lisa at the airport in Boston, MA on Friday. We’ll be taking them to an RV rental place where they are getting a Class B RV to convoy with us for 10 days. We don’t know what we are going to do for 10 days, but I’m sure we won’t run out of things to see.
After our friends head back to Iowa, we are going to head south to stay out of the snow. We’ll probably be gone for 4-6 months living on the road. I’ll try to keep the blog updated with what we are up to.
Nicole wanted a map to track which states we have been to. We stuck the map on our refrigerator as you can see below.
So far on this trip we have been to the Volo Auto Museum, in Illinois, and Niagara Falls. I’ll post separate blog posts for each of those. We’ve been driving for 3 days now, so the Northeast United States is getting filled out.
We are currently in a Walmart parking lot in Westborough, MA for the night. The sign when you enter this Walmart says “No Overnight Parking”, so we’ll see if we get overlooked or not. We purposefully built the van to not look like someone is sleeping in it for this kind of situation. We’ve done multiple nights in Walmarts that banned parking before, so I don’t expect any problems. I’ll let you know if we get rousted. :-)
I just ordered a new iPhone. I’ve been using the same iPhone 6 Plus for 4 years now and am tired of it not fitting in my jeans pocket when riding a motorcycle. If I hadn’t wanted a smaller phone, I probably wouldn’t have upgraded. More on that later...
I ordered a silver iPhone Xs and a green leather case for it. I think it looks pretty sharp. With the camera bump on the back, I think that the case is required equipment. The total cost was over $1200, but Apple is giving me $100 for my old phone.
Although it is a lot of money, I feel really good about the purchase, especially after watching the September Keynote address. In it Apple stated that making phones last longer is a specific business strategy for them. Horace Dediu has a great write-up on it, Lasts Longer.
Essentially, without the carrier subsidies, people aren’t upgrading their phones as much. I used to update mine every 2 years. Back then it wasn’t like they were going to make your monthly bill smaller after paying for 2 years, so everyone always got a new phone. That was awesome for the companies selling you phones.
Now that the subsidies are gone, that incentive to upgrade every two years is gone. My wife and I held on to our phones for 4 years and she still doesn’t want to upgrade hers. So what are these companies to do? I think Apple’s approach is to be known as the company that makes phones that last the longest and charge more for that. What they are willing to do is sell you a phone that lasts 2-3 times as long for 1.5-2 times the price.
I think they are really serious about this. I’ve been running the test version of iOS 12, the version that comes out this month, and it is impressive. It made my old iPhone 6 Plus run faster than the day I bought it. That is great, considering that most operating system releases try to take advantage of new phone hardware to add new features, which makes older phones run slower. Not this time. It sped up, and it sped up a lot. And iOS 12 supports some really old phones, including the iPhone 5s that came out 5 years ago.
My advice is this. If you are in the market for a new phone then take into consideration that you will hold on to that phone for 3-5 years now. Does your phone get regular software updates? Will it be getting software updates after 2 years? And if you have an iPhone that is feeling slow and think it is time to upgrade, wait a minute. The new iOS 12 is coming out soon and your old phone might feel brand new again.
Apple has a winner with Xcode 10. With macOS Mojave it gained Dark Mode, which many of you have seen in other IDEs and pro-level tools. Not only does it make things look futuristic and cool, it actually reduces eye strain by reducing how much brightness is pushed at you.
Beyond that, I’m having trouble putting my finger on what is so much better about Xcode 10. It feels like I am fighting it less often. Maybe it is because I have gotten more experience with it over the years, but I think something is different. It’s like the searches are more accurate and navigation takes you where you expect it to. Autocomplete seems to work better too. It is certainly more stable than Xcode 9, even while still in beta.
I’m not quite sure I would say that Apple has caught up to JetBrains or Microsoft for IDEs, but I think they are closing the gap.
I started the blog and a couple others in early 2017. My thought was that I would write posts about IT and Vanlife and keep them separate. That was way too ambitious for me. At the time I wasn’t very interested in the IT world and the Vanlife posts seemed to take too much effort.
I ended up settling on using Instagram to post Vanlife pictures. It had a simple interface and made it easy to crosspost to Facebook. The icing on the cake was using an Instagram plugin for my WordPress site that automatically pulled Instagram content and integrated it. It worked pretty well. For a while.
One day the Instagram integration stopped working because they made their API more restrictive. Pretty irritating, but I got it back up and working. Then Twitter began shutting down developer APIs and it got me thinking more seriously. Then Facebook shut down the ability to do cross posting through its API. Twitter did it to try to make more money and Facebook needed to do it to stop the Russian troll bot farms. Regardless, it got me thinking.
It is just a matter of time before Instagram shuts down the API I am using to integrate it with my site completely. They don’t make any money off of it and it doesn’t drive much traffic to their site. When that happens, all my content is locked up there.
So to clean everything up, I merged all the content from my different blogs and Instagram into this one blog. My strategy from here on out is to only post content to my own server and then crosspost links back here from Facebook, Twitter, and/or Linkedin so that people can find what I have been posting.
I also am using a pretty nice piece of blog posting software, MarsEdit 4. Now with the right tools and the right strategy, you’ll start hearing more out of me.
We lived just outside Yellowstone and The Grand Teton parks for over a week. The West Yellowstone campsite was our favorite. It was right next to a babbling brook where I could set up my hammock and read all day. There was also plenty of fallen deadwood to use for the fire. It was pretty much everything you wanted in an off-the-grid campsite.
This is a shot I took while trying to cook foil dinners in the rain. Nic is hiding inside and all the furniture is stuffed under the front of the van to keep it dry. I was determined to keep the fire going for the 40 or so minutes it took to cook dinner. It was close, but we got our campfire dinner that night.
Napa Valley, The Lost Coast, and Wolf Creek.
Big Sur and Yosemite were some highlights from when we left LA this spring (2018). We stayed at some pretty amazing spots, but they rarely had cell service. To get cell service, we stealth camped in Woodland, CA. After that we headed to Napa Valley to check out the wineries.
Here is one advantage to having a low top van. I was able to park it in a friend’s parking garage in LA when we were visiting California this spring (2018). Unfortunately, most parking garages are still too short, especially now that the van has been lifted and has solar on the roof.
There is a saying in the #vanlife world. “You can live in your van or out of your van.” My original intent was that we would be living out of the van and mostly just spend time in it when sleeping. That didn’t exactly work out how I had intended. We spend much more time than I had expected cooking inside, working there, and hanging out inside to escape bugs. Still, besides having to put my pants on bent in half, I haven’t missed having a hightop. The way the van is set up on the inside almost everything can be done from the sitting position including cooking.
Some nice advantages to having a low top van are that it is more stealthy. That is, it looks more like a contractor’s panel van than a camper. This is handy if you are sleeping somewhere that you aren’t supposed to. The low top also allows for more square footage on the roof. This means that more solar can be fit on the roof than some other vans.
For now the low top van works great, but I wonder about 15 years from now. Will I still be flexible enough to get my pants on?
Lake Havasu was too hot for us, so we went to the Mojave Desert. Really. It was much cooler in the Mojave due to being farther north and, I think, a higher elevation.
This spot was one of my favorites. We were by ourselves a good distance from the highway. There was a town 30 minutes away and we had excellent cell phone data reception. I think that this might be a place to spend an extended amount of time in the winter.
Here we are camping near Lake Havasu, AZ. On top of the roof rack, you can see our 400 watts of solar panels. This powers all our accessories in the van including our halogen hot plate for cooking.
This is another shot of the solar panels at our home in Iowa. Next to the solar panels are the sand ladders. If I get stuck in sand, mud, or snow, I can place those under the rear wheels to (hopefully) drive myself out of any holes I’m in. They also serve as a platform to walk on the roof to check out the view or to clean the solar panels.
Here is the family enjoying the Grand Canyon. We didn’t spend much time there. We basically looked at it, found a campsite for the night, and took off the next morning. Maybe next time we will take a hike down into the canyon, but this time we didn’t feel much like fighting the crowds.
Here are a couple of pictures of the van with the new suspension from Weldtech Designs. I had it installed in San Diego, CA when we were out there this spring (2018).
They lifted the front 3” and the rear 1.5”. Previously the back sat higher than the front, so the van is pretty much perfectly level now. It is amazing how the van rides and handles. Better in every way, both on the highway and off the beaten path.
Here are a couple shots I took while driving to the Rubber Tramp Rendezvous in 2017.
Making a successful app takes a lot more than just writing good code. Here I write about what I did outside of Xcode to make Bind It.
Bookmarks or leaving tabs open didn’t really work so well. Eventually, I ended up with a mess of bookmarks that I considered non-permanent and cluttered up my bookmarking system or I lost the tab I’d kept open for weeks.
My brother lives in a cellular dead zone and if I was watching his kids for him, I had an opportunity to catch up on some reading, but no internet. Besides, articles disappear. They get deleted or moved behind a paywall on the internet. I didn’t know how soon I would get to an article and didn’t want to lose it if I took too long. I needed a solution that didn’t require a constant internet connection, kept my articles organized, and kept them permanently.
My wife had different needs that I wrote Bind It for. She likes to read amateur horror stories from a subreddit called Nosleep. Some of them are long and she would often lose track of ones she wanted to finish. She also uses the Reddit iPhone app to read them and the text in it is small and hard on the eyes.
eBooks are packages of text and images that are portable. eBook readers, like iBooks, allow you to store these eBooks, manage them, and view them. The reader will scale the text for you and even allow for different viewing themes that work better for day vs. night. eBook readers work to reduce eye strain.
What we needed was an app that created an eBook on the fly from a web page. It needed to figure out which images and text were important. It needed to search for and throw away ads and user comments at the bottom of articles.
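Bind It’s actual extraction code isn’t shown here, but the core idea, keeping article paragraphs while throwing away ads and comment sections, can be sketched with a few heuristics. This is a minimal, hypothetical illustration (the class names and skip lists are my own, not Bind It’s) using only Python’s standard library:

```python
from html.parser import HTMLParser

# Tags that never hold article body text.
SKIP_TAGS = {"script", "style", "nav", "footer", "aside"}
# Class-name fragments that often mark ads or user comments (a rough heuristic).
SKIP_CLASSES = ("ad", "comment", "share", "sidebar")

class ArticleExtractor(HTMLParser):
    """Collect paragraph text, skipping subtrees that look like ads or comments."""

    def __init__(self):
        super().__init__()
        self.paragraphs = []   # extracted article paragraphs
        self._buffer = []      # text of the paragraph currently being read
        self._skip_stack = []  # open tags of a skipped subtree, if any
        self._in_p = False

    def handle_starttag(self, tag, attrs):
        if self._skip_stack:
            # Inside a skipped subtree: track nesting of the same tag name
            # so we know when the subtree really closes.
            if tag == self._skip_stack[-1]:
                self._skip_stack.append(tag)
            return
        classes = dict(attrs).get("class") or ""
        if tag in SKIP_TAGS or any(c in classes for c in SKIP_CLASSES):
            self._skip_stack.append(tag)
        elif tag == "p":
            self._in_p = True

    def handle_endtag(self, tag):
        if self._skip_stack:
            if tag == self._skip_stack[-1]:
                self._skip_stack.pop()
            return
        if tag == "p" and self._in_p:
            text = "".join(self._buffer).strip()
            if text:
                self.paragraphs.append(text)
            self._buffer = []
            self._in_p = False

    def handle_data(self, data):
        if self._in_p:
            self._buffer.append(data)

extractor = ArticleExtractor()
extractor.feed("""
<html><body>
  <div class="ad-banner"><p>Buy now!</p></div>
  <p>Chapter one of the story.</p>
  <div class="comments"><p>First!</p></div>
</body></html>
""")
print(extractor.paragraphs)  # ['Chapter one of the story.']
```

A real app also has to score images, cope with pages that lack clean markup, and package the result into an eBook, but the filtering idea is the same.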
My software company’s name also needed to be updated. Vineyard Enterprise Software, Inc. worked well enough for the name of the staff augmentation company I started in the 90’s, but didn’t really stand up well now that I focus on web and mobile development. Vincode is shorter and easier to remember, so I trademarked it. It fits the modern web and mobile world better. Apple doesn’t allow you to use trade names, so until I formally change the name of the company it will show up as the longer, older name for now.
My current plan is to run a trial advertising campaign on the amateur story subreddits, like Nosleep, to target them specifically. The readers of these subreddits are one of the main two use cases for Bind It, so hopefully conversions will be decent. At $1 per 1000 impressions, it should be affordable too. Based on the initial trial marketing campaign on Reddit, I might roll the campaign out to other short story sites on the web. If anyone has a site that they think would work well with Bind It, let me know. I’ll make sure Bind It works for it and may help support it with advertising.
I registered vincode.io since it is common to use the “io” top level domain for tech companies these days and vincode.com was already registered.
This article will also appear on LinkedIn. I’m hopeful that posting on LinkedIn will get a handful of early adopters during the initial soft rollout. This will find more bugs before I start advertising and could head off some negative reviews when I ramp up marketing. I’ve already put out two revisions of the app based on early testing.
If any significant social activity springs up around Bind It, it will most likely be on Reddit since I plan on advertising there. I created a Bind It subreddit where I can get feedback from people. Hopefully participation will be higher there since the people I advertise to will likely already have a Reddit account to post under.
The social game for Bind It is a little weak. This is an area that I plan to continue to work on.
As a child of the 70’s, I saw Elvis everywhere. He was in the movies that Channel 13 played in the afternoons during summer vacation. His music was all over the radio. His cool blue vinyl record was spinning on my grandmother’s record player when she would babysit my brother and me.
When I woke up cold in Wisconsin and saw that directly south of me was Memphis, TN, I knew where I was going next. With Nicole and the cats snuggled together under layers of blankets, I started up the van, turned on the heater, and hit the road.
I forgive you for thinking that I am a straight-up idiot for being in Wisconsin in November without an auxiliary heat source in the van. I just haven’t gotten to that yet, and you will think me an idiot for other reasons.
I was using one of those small ceramic heaters that you can pick up at Walmart for $20. It worked great in my condo and I had a really beefy inverter to power it, so I should have been fine. Unfortunately, the heater cut itself off after a few minutes of working. The heater was rather old and these things die periodically, so I trashed it and replaced it with a newer model from a Wisconsin Walmart.
It was about 4:30 in the morning when I finally had to stop for gas. The gas station was out in the middle of nowhere and was mostly abandoned except for a single clerk. This was one of those dirty gas stations that has a diesel and oil soaked gravel parking lot out back. My luck kicked in while I was putting gas in the van. I heard a hissing noise. I must have run over something in the parking lot of this shit hole station.
Since I was stuck until a tire shop opened, I figured I’d wait until morning to call AAA. I got the new heater out and plugged in. It didn’t work at all. Figuring that the inverter wasn’t strong enough, I set up the generator and plugged the heater directly into it. The heater immediately cut out.
I was cold, tired, and feeling inadequate because I couldn’t keep my weird little family warm. Now, I’m not the type that misdirects my anger at inanimate objects. I don’t slam doors or break dishes. But something snapped in me and I smashed the shit out of that heater all over that gas station parking lot.
Finally Nic talked me into trying to go back to sleep. I did so reluctantly and in defeat.
When we got up later in the morning, I was thinking more clearly. I realized the leak in the tire wasn’t that bad and that I could air up the tire and drive 5 miles back up the road to a tire shop that opened early. I didn’t need to wait on a tow truck or change a tire in the cold on oil covered rocks. Once we got the tire fixed, we were back on the road to Graceland. The patch in the tire didn’t take and we had to stop in tire shops two more times before we finally got it done right, but we took it in stride.
Social media makes everyone’s life seem more glamorous than it actually is. Generally people don’t post embarrassing or boring photos or stories about themselves. The various #vanlife Instagram accounts always show people waking up next to the ocean or a mountain lake. Nobody posts how they woke up in a Walmart parking lot again or in a shithole Illinois gas station with a flat tire.
With all the drama from this car getting stolen, I thought I would let everyone know why I bought a 1974 VW Beetle when I am downsizing and planning on living in a van.
For those who missed it or aren’t Facebook friends with me, about a week ago I bought a classic VW Bug. The doors didn’t have any keys, but I only paid $3,700 for the car so I didn’t expect anyone to risk going to jail to steal it. It is probably about the cheapest car in my apartment parking lot, so I figured a different car would be targeted first. Wrong. I got home from having lunch with my friend Jerry and it was gone.
I should have realized that the car stood out and may be worth more than I paid for it. The first time I bought gas, I was approached by someone who wanted to talk about it. My apartment manager half-jokingly offered to trade me her ’06 black Beetle for it. I parked in front of Dunkin Donuts to get a coffee. The girl who got me a cream donut and coffee had that glitter in her eye and flirty welcoming smile that could melt any man’s heart. I half expected the bug to wink back at her and for them to leave together. Everyone loves that car.
I put a post on Facebook and asked everyone to share it so that if someone saw it they could call 911 and report it. A good friend, Vicky, posted it to her neighborhood group. Vicky and I both live in the Little Italy district in Omaha, and someone in her group saw that it was only a few blocks from my house. Luckily I was able to retrieve the car with only minor damage to the ignition lock. It is now securely in a garage awaiting an electronic alarm.
Now it is time for me to come clean. My plan for the car is to turn it into a bugeye Baja Bug. Like this one:
This is where the conflict comes in. I hadn’t expected to buy such a nice Beetle. The car I bought has only minor surface rust on the pan. The interior is in better shape than my Corvette’s. It has 65,000 miles on the odometer, which I naturally disregarded for a car that old. After closely inspecting the car, there is a good chance that number is accurate. Replacing the engine, transmission, suspension, and about half of the bodywork and interior feels… just wrong. When I told my brother the plan, he had an open look of disgust on his face that mirrored my own gut feeling.
If you try hard enough, you can justify any bad behavior to yourself. I just keep telling myself that by “donating” the parts I remove on eBay, that there will be many more complete bugs in the world for it. In fact, by putting more bugs in the world, I’m doing God’s own work. Or is that Hitler’s own work? Uh… This isn’t working.
The idea to make a Baja Bug came to me in Quartzsite, AZ while there for the big RV show and RTR. Several people I saw there were driving around in Sport UTV’s that were licensed as motorcycles and legal to drive on the roads there. The Sport UTV’s were so small, that people parked them in the strangest places and got away with it.
The usefulness of crossing an ATV with a golf cart is immediately obvious if you are staying on BLM land near Quartzsite or some other area of the Southwest. You can tear across the desert and get to places that your van or RV can’t get to. You can also set up camp with your rig’s awning and exterior furniture and not have to take it down to go and get water, food, and other supplies in town. The major downside is that you need a motorcycle license for them, which Nic doesn’t have. They also aren’t legal to drive on the streets in most states that we would be visiting. The cons ultimately outweigh the pros, so no go for the Sport UTV.
But hey, no worries. Previous generations had a better solution anyway.
Update 02/07/18: I didn’t ever get to finish this project. It was a situation where I needed to focus on finishing out the van build and stop getting distracted with side projects. I ended up selling the little Beetle for what I had in it. I still believe in having a Baja Bug if you are driving around a full size RV, but it really isn’t necessary when you are using a smaller Class B like I have.
I still want a Baja Bug and may still get one, but I will probably buy a finished one. It doesn’t seem like anyone gets their money back out of the conversion and it is just cheaper to let someone else take that hit.
Dad and I had just about had enough of trying to make this 2014 Ford E-250 into the perfect cross between a stealth camper and a Sportsmobile. My struggles with depression had already delayed this project months beyond where it should have been. It was already the beginning of November 2015 and my vision of what the van should be was still unfulfilled.
My mom has an eloquent way with words. As she would say, “It’s time to shit or get off the pot”. It was time for Nic and I to finally hit the road. We finally just decided to throw the cats and what we thought we would need into the van and get going without much of an itinerary. The only place we really knew we were going was The House on the Rock. After that, it was just the open road.
The House on the Rock was a destination because Nic had been there on a family vacation as a child and had fond memories of the trip. I had never been there and anywhere besides Omaha, NE was good by me at this point.
It is easy to see why the House on the Rock would be awesome to see as a kid. It is pretty awesome to see as an adult. I won’t ruin the experience for you, but the whole thing is basically an eclectic collection put together by a nutty 1970s free bird. Right up Nic’s alley.
It doesn’t matter how much the SSRI’s impact my libido. I’m not giving this a try.
There were only a couple of problems with the House on the Rock. It was in Wisconsin, it was November 2015, our little space heater wasn’t working, and no one in my little family has much body hair except me. Nobody was having much fun that first night sleeping in Wisconsin.
I woke up in the middle of the night and made a decision. I looked at what was straight south of us on the map, got behind the steering wheel, and started driving while everyone else slept. We were on our way to Graceland.
For those who don’t know, I lost my mind in the summer of 2015. As I told a friend who asked if this came suddenly, “No, this was a long time coming.” I had been trying to fight off anxiety and depression using alcohol and willpower for quite a while, but that only works for so long. At the time, I didn’t know that I had limits to the amount of stress that I could handle and I piled it on recklessly. In the end, I burned out and had a mental breakdown that left me unable and unwilling to work in my profession as a Software Architect.
A contributing factor may have been my age, 43 at the time. Was it a midlife crisis? Couldn’t be, right? After all, I didn’t have difficult children, crippling debt, or a loveless marriage. I already had a motorcycle, a red Corvette, and a hot younger woman. I should have been safe. Regardless, I still wanted to run away and live a life of solitude, as many men do at this age. In the end I chalked my breakdown up to hereditary major depressive disorder. There is something of a history of this on my maternal side of the family, so the theory is plausible. Don’t worry, I’m medicated now and doing fine.
More and more I’m coming to realize that many of those who wander aren’t necessarily looking for something, but trying to avoid the world and all that comes with it. At the Rubber Tramp Rendezvous, one woman stood at a group meeting and said that she no longer needed anxiety medication since hitting the road. Several heads started nodding in agreement at this comment. I know that when Nic and I traveled, it helped me tremendously.
I heard another longtime woman traveler speak about nature deficit disorder, where not being in nature enough can cause personal distress. I think there is something to this. We watch the movie Blackfish and are horrified at how we keep orcas in constrained captivity. Then we go and sit at the same desk in front of a computer most of our waking hours, stuck in our own Westworld loops. All this completely without a sense of irony.
My brother reads a favorite blog, Bowman Odessey, where the author is clearly struggling with this very subject. Or maybe I’m coloring it with my own experiences. You decide. Regardless, it is well written and worth killing a couple of minutes here and there. Check it out.
Hi. My name is Maurice C. Parker and I am the sole founder, President, and head janitor of Vincode. Vincode is the new trade name for Vineyard Enterprise Software, Inc. Welcome to my company rebranding and rebooting.
I started Vineyard as a company to do staff augmentation for Fortune 1000 companies and government agencies. Since that time, my career has had many twists and turns. In short, I never did grow it into a staff augmentation company and instead used it over the years mostly for tax purposes. The new plan is to use it as a vehicle to release mobile apps and do freelance consulting.
Back in the 90s, the name Vineyard Enterprise Software, Inc. was pretty good. There were lots of three-word company names that got shortened to their acronyms. Domain names weren’t a primary concern when naming your company back then either. If they had been, I wouldn’t have spent the last 20 years typing (or spelling over the phone) a 30-character email address, firstname.lastname@example.org. The old name doesn’t represent what I do now or the industry that I work in. Vincode is shorter, more contemporary, and my new email address is only 13 characters, email@example.com.
In the interests of self-promotion, I will be writing regular articles for this site. I have over 20 years of industry experience as a consultant, and many of those experiences are unique. I think sharing them will help enlighten others out there who don’t have the same number of battle scars that I do. I will be cross-posting them to both Medium and LinkedIn, so if you already like those platforms, you can find them there as well. The topics will be about Software Architecture, SDLC, Project Management, Change Management, Quality Assurance, Ops/DevOps, and other IT disciplines. More importantly, I will write about the interactions between these disciplines and how they can impact overall project productivity.
I hope you’ll stick around.
I'm at the Rubber Tramp Rendezvous, or RTR for short. For those not familiar with the term, a rubber tramp is basically a hobo on wheels. The wheels can be anything from the largest RV or bus to a motorcycle. All types of people are rubber tramps, from all different socio-economic classes. On one side of me is a retired circuit board designer from Intel. Across from me are hippies selling barley and beet juice shots.
I'm here because my wife Nicole and I are in the process of becoming full-time vandwellers, a subclass of rubber tramp I guess. I came down here to learn more about the lifestyle from the people who live it all the time.
In the photo above is my rig, the white cargo van. I'll put up detailed posts about the different modifications made to turn it into a stealth camper van. Stay tuned. Behind my white van you can see a blue-black van owned by YouTube blogger Dave2d. Check out Dave's channel and others if you are curious about the lifestyle.
While I am fairly new to vandwelling, I do have some experience. Nicole, the cats, and I lived out of the van for 2-3 months total last year to see if living this way suited us. It did, and now we are working toward becoming full-timers ourselves.
This process for us is well underway. In this blog I will document how over the next year or so we prepare for finally going full-time. I will also do flashbacks to fill you in on the journey thus far.