Sometimes the project finds you…(part 2)

I intended to write this just after the events took place, but never got around to it.  Now that SoylentNews is coming up on its third birthday, I finally put together a little bit of the back-history.  The following can be found as part of a third anniversary story on SoylentNews.

Here is some of the pre-history of SoylentNews, the stuff that gets lost in the mists of time around the first coordinated development effort — running on a VM, on a laptop in my basement, under the slashcott.org domain. The slashcott had been announced and was to commence in some number of days. A bunch of folks thought it would be an awesome idea to get an independent version of slash running in time for the slashcott — what could go wrong?

Three years, and a ton of life changes for me, make some of this a little fuzzy, but I'll do my best to put things together. I've relied heavily on my email archive of that time, which helped spur a bunch of memories. Hopefully this will be a coherent tale. (Maybe for next year I'll mine my personal IRC logs from when we were still on freenode.)

At first there was a bunch of coordination in the ##slashcode channel on freenode, and a bunch of emails were buzzing around trying to coordinate some things and ideas. My first email to Barrabas was on 02/06/2014, about wanting to pitch in. The issue at hand was that "slashcode" had been hastily open sourced 5 years prior, then pretty well abandoned. Not only did you need to build the perl modules from scratch, but it would only build against Apache 1.x. Even once you managed to run that gauntlet and got it compiled and installed, things barely ran and were pretty horribly broken. Anyway, it soon became apparent that robind, NCommander and myself (mechanicjay) were making the most progress on getting something running. As I recall, Robin was the first to succeed in getting an installed, running site, but his VM was stuck behind a corporate firewall.

In the meantime, I had gotten the domain slashcott.org registered while trying to build things myself. At some point a bunch of us decided to combine forces: robind shipped me his VM, I got it running on my laptop (as it was the only 64-bit thing I had at the time), we got myself and NCommander ssh'ed in, and we started hacking. For some reason RedHat VMs were horribly laggy on my openSUSE VirtualBox host, and work was slow and painful, but progress started to be made.

The only bug I've ever fixed in the code base was a critical piece of the new account email/password generation stuff; as I recall, the generated password wasn't actually getting written to the DB. (Sadly, the evidence of my contribution has been lost; I think I shipped the fix to either robind or NCommander, so they have credit in the git history.) Regardless, it was a critical piece. I have an email dated 02/08/2014 with my new account/password, which worked — it was a huge boon and let us bring a couple of people in to start hammering away at finding front-end bugs (of which there were countless). The next big thing I see from mining my email is the first "Nightly stories email", which came out on 02/11/2014 (from the slashcott.org domain). I think we ended up with about 50ish users on slashcott.org (gosh, I hope I still have that vmdk stashed somewhere).

On the night of 02/11/2014 (or very early morning of 02/12/2014), after I gave up and went to bed (I had a newborn and was teaching an undergrad class on the side in addition to my regular 9-5 — I was beyond toasted after a week), the VM locked up hard. It had done this a couple of times before, but I had always been available to poke it with a stick and bring it back. As I was unavailable and no one had exchanged important things like phone numbers yet, NCommander made the executive decision to spin up a Linode, which was great. The laggy VM on the laptop wasn't meant to last forever, though I admit I had visions (delusions?) of hosting the site myself on some real hardware at some point. In retrospect, Linode has been an amazing way to run this site and absolutely the right decision.

I got my new account on the li694-22 domain on 02/12/2014; that new account email was for mechanicjay, UID 7 — which is where I live on the site to this day. I kept the slashcott.org server in sync with code changes for a bit, and it was a pretty handy testing platform until the "official" dev box came online on 02/14/2014. At some point during that week we had landed on the soylentnews.org domain, and that's where we went live on 02/17/2014.

So there you have it: we went from a group of independent pissed-off people with no organization and an abandoned, broken codebase to launching an honest-to-goodness site in ELEVEN fucking days.

Posted in A Day in the Life | Tagged | Leave a comment

Retro Data Structures

One of the more fun things I do with my spare time is play around with old computers. Specifically, I enjoy my Atari 800. I recently started thinking about a small game to write on the machine, something with a small map which you can explore. Think Zork, on a very, very limited scale. This is mostly an exercise for me to see if I can pull this off in Atari BASIC.

If you were to make, say, a 3×3 grid with a bunch of data attached to it, without getting all Object Oriented, you might choose a simple data structure such as a 2-dimensional array to retrieve the data associated with your particular x,y coordinates on the map. A graphical representation of this data might look like this:

       1                2              3
 1 The start       A treasure!     A river.
 2 A monster!      Inscribed Rock  The wizard.
 3 A forest.       A bird.         Home

Atari BASIC has three basic data types: number, character, and boolean. It also has arrays: you can make an array of numbers, which is a standard thing even today, or an array of letters. You may be tempted to call the latter a string, and it is referred to as such, but if you think of strings in Atari BASIC as character arrays, your life starts getting easier. You can also make a multi-dimensional array of numbers. A thing that you absolutely cannot do, however, is make a multi-dimensional character array, a matrix of strings if you will, at least not in a basic, straightforward way. This limitation hit me pretty hard. Living in the modern age, I'm used to slamming together data types in a multitude of different structures without worrying too much about it.
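To make the limitation concrete, here is roughly what the declarations look like (the variable names NUMS and NAME$ are just placeholders of mine):

10 REM WHAT ATARI BASIC WILL LET YOU DECLARE
20 DIM NUMS(3,3):REM A TWO-DIMENSIONAL NUMERIC ARRAY -- FINE
30 DIM NAME$(20):REM A 20-CHARACTER "STRING" (REALLY A CHARACTER ARRAY) -- ALSO FINE
40 REM THERE IS NO EQUIVALENT DECLARATION FOR A 3X3 GRID OF STRINGS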

So, given this limitation, how do you get all that string data into a data structure that you can reference by some sort of position? One place where Atari BASIC helps us out is that you can reference positions in strings, and take substrings, quite easily, which turns out to be the ugly key we need.

Say I want an array to hold 3 things: myarray$="Mary Bob I really like dogs, they are my favorite." If I wanted to get the word "Bob" out of this, I'd call for myarray$(6,8). Mary would be myarray$(1,4), and the sentence would be myarray$(10,50). The issue, of course, is that all the lengths are irregular. I can't simply retrieve the nth element without knowing its position in the larger string. But what if we make the string lengths regular? First, determine the longest string you're going to allow. In this case the sentence about dogs is 41 characters, so call it 42. Then multiply by the number of elements you'll be holding: 3*42=126, so declare a string 126 characters long. Something like the following BASIC code:


10 ELEM=3
20 MAXLEN=42
30 DIM MYARRAY$(ELEM*MAXLEN)

Now you can reference the different elements by using MAXLEN as a multiplier to get the proper positions. Bob would be MYARRAY$(43,84), or MYARRAY$(MAXLEN+1,MAXLEN*2); Mary would be MYARRAY$(1,MAXLEN). We can wrap the whole idea in a subroutine (no functions here, kids!) to make the positional calculations. Note that GET and END are reserved words in Atari BASIC, so the element index and end position get the names ITEM and FIN instead:


40 REM GET THE 2nd ELEMENT
50 ITEM=2
60 GOSUB 100
70 END
100 REM ELEMENT RETRIEVAL SUBROUTINE
105 REM ITEM AND FIN USED BECAUSE GET AND END ARE RESERVED WORDS
110 START=ITEM*MAXLEN-MAXLEN+1
120 FIN=START+MAXLEN-1
130 PRINT MYARRAY$(START,FIN)
140 RETURN
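
The program above only reads. For a map you'd also want to write into a slot, so here's a rough sketch of a companion store subroutine: it blanks the slot, then copies the value in, padded out to MAXLEN with spaces so the neighboring slots stay put. TEXT$ and the line numbers are my own untested placeholders; it assumes TEXT$ has been DIMensioned, is non-empty, and is no longer than MAXLEN.

200 REM ELEMENT STORE SUBROUTINE: PUT TEXT$ INTO SLOT NUMBER ITEM
210 REM ASSUMES TEXT$ IS DIMENSIONED, NON-EMPTY, AND NO LONGER THAN MAXLEN
220 START=ITEM*MAXLEN-MAXLEN+1
230 REM BLANK THE WHOLE SLOT FIRST SO OLD CHARACTERS DON'T LINGER
240 FOR I=START TO START+MAXLEN-1:MYARRAY$(I,I)=" ":NEXT I
250 MYARRAY$(START,START+LEN(TEXT$)-1)=TEXT$
260 RETURN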

The interesting thing to me about this approach is how incredibly space-inefficient it is, which is especially noticeable when you're working on a machine with 48K of memory. It's also a good reminder of the kind of stuff that has to go on under the covers in our nice modern languages to make them so comfortable to work with.

Remember though, I'm interested in a matrix of strings! It turns out that with a little math you can extend this scheme to make a 2-dimensional array of strings as well. All it takes is another multiplier in there, which incidentally multiplies the space cost yet again.


10 ROW=3
20 COL=3
30 MAXLEN=50
40 DIM MYMATRIX$(ROW*COL*MAXLEN)
50 REM POSITION: X IS THE COLUMN (1 TO COL), Y IS THE ROW (1 TO ROW)
60 X=2:Y=1
70 GOSUB 100
80 END
100 REM MATRIX RETRIEVAL SUBROUTINE
110 START=(X-1)*MAXLEN+(Y-1)*COL*MAXLEN+1
120 FIN=START+MAXLEN-1
130 PRINT MYMATRIX$(START,FIN)
140 RETURN

In this way, by manipulating the X and Y variables and calling the subroutine, we can retrieve different "cells" of data from our matrix.

Go ahead and stare at that second BASIC program for a few minutes until the math sinks in. The start position is calculated just like in the 1-dimensional example, with an additional row offset of (Y-1)*COL*MAXLEN tacked on.
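
For example, with MAXLEN=50 and COL=3, asking for X=2, Y=1 gives START=(2-1)*50+(1-1)*3*50+1=51 and FIN=100, so that cell lives in characters 51 through 100 of MYMATRIX$. Move down a row to X=2, Y=2 and everything shifts by one full row of 3*50=150 characters: START=201, FIN=250.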

This approach will work decently well for smaller grids with not too much data. Say a 3×3 grid, with each "cell" containing 255 characters or so: that results in just under 2.5K of use. What if you wanted a larger map though, say a 9×9? Well, that's 20K, almost half your memory.

The strategy for dealing with this is to break your 9×9 down into 9 different 3×3 grids. Since this is, in theory, a map that we are traversing, imagine another variable to hold your current "grid" number, and a subroutine to calculate which grid you'll be in when you move. If it's different, load the new grid's information from disk, along the lines of the sketch below. In this way you can keep the memory footprint pretty small, and 2.5K loads pretty quickly from a floppy drive.
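
For instance, a minimal, untested sketch of that loader might look like the following. It assumes the nine sub-grids live in files named D:GRID1.DAT through D:GRID9.DAT (names I made up), that FILE$ was DIMensioned at program start, and that G holds the grid number your movement subroutine decided it needs:

1000 REM LOAD SUB-GRID NUMBER G FROM DISK INTO MYMATRIX$
1010 REM ASSUMES DIM FILE$(11) WAS DONE AT PROGRAM START
1020 FILE$="D:GRID0.DAT"
1030 FILE$(7,7)=STR$(G)
1040 OPEN #1,4,0,FILE$
1050 FOR I=1 TO ROW*COL*MAXLEN
1060 GET #1,C
1070 MYMATRIX$(I,I)=CHR$(C)
1080 NEXT I
1090 CLOSE #1
1100 RETURN

Reading a byte at a time with GET # isn't fast, but for 450 characters it hardly matters, and it keeps the file format dead simple: each grid file is just the raw 450-character string.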

When I finish up this exercise I will post the code so you can bask in its glory.

Posted in Retro Computing | Tagged | 1 Comment

Gitlab Upgrade: Postmortem

This is an article I drafted and forgot about. It dates from January 2016.

This week at work we had an upgrade fiasco. If we want to lay some blame out there, I'll take 50% for inadequate testing procedures and lay the other 50% on the GitLab development team for making a breaking change to a versioned API. The basic issue is that they ripped some functionality out of a particular API with the idea of integrating it with the main API. Not only did they not replicate all of the functionality, but they also crippled the old API, making it mostly non-functional. Did I mention that this change is completely undocumented, to the point where the current documentation still reflects the old API functionality? You can read my ticket for this issue here: https://gitlab.com/gitlab-org/gitlab-ce/issues/5599  As of this writing, it seems my issue is not being taken seriously.

The upgrade took place on Tuesday. Classes start on Monday. New Year's is on Friday, so we have a short week. What's a boy to do? I wasn't interested in rolling back, for two reasons. 1) By the time the issue came to light, there had been a bunch of activity on the system. Restoring the pre-upgrade database dump from several hours prior would have introduced data loss, which I think is a bigger evil than broken functionality. 2) GitLab has a very aggressive upgrade cycle; they release a new version monthly. This wouldn't be so bad, except that once a new version drops, the previous versions are abandoned. As I only upgrade the system 4 times per year (at quarter breaks) [The GitLab project has since started back-porting security patches a couple versions], I always start the quarter with the latest version — this seems the most responsible from a security and functionality standpoint. Given the above, I felt the only option available to me was to restore the missing API functionality. Here's where this analysis will descend into a curmudgeonly rant.

Ruby on Rails — WTF. Seriously. I've avoided Ruby stuff for years, because every time I've attempted to look at a RoR codebase I've come away with a sick feeling that I'm incapable of understanding it. For those of you who don't know, the basic philosophy of RoR is to make extensive use of templating and whatnot, so that you only need to write a *very* small amount of code in order to do things. This idea works: if you have your application set up properly, you can change an entire database call just by changing a class name, or pull different data back from the database with a very small variable change. The downside is that it's damn near impossible to figure out what is actually going on under the covers. The whole thing takes programming abstraction to a near-terminal degree, making it very powerful but impenetrable to mere mortals unless you drink (main-line?) the Kool-Aid.

Regardless, I dug into the code. I started by restoring the missing code file which described the part of the API that had been removed. I then made an API call to that location and followed the stack trace to see what dependent files were missing. Through this iterative approach I was eventually able to get the API calls to stop generating server errors. It was at this point that the *real* work began. As the data is now being held in different locations, I needed to grok the way the templating works in order to modify the database calls to:
1) point to the new table
2) reference the different column names where the relevant pieces of data are held.

This took quite a long time, as I was getting classname conflicts. Some of the supporting files were defining and using a class which pointed to now-empty database tables. Eventually I figured out how to change the class being used for the particular database calls I was trying to make, and I was able to retrieve the relevant data. Then came the hard part.

The hard part was to modify the handler for a POST request to dump data into the new database tables. Using some of the knowledge I'd gained in the first step, I was able to get it to reference the new table; then it was a (sort of simple) process of updating some column names and suddenly… it worked.

The end result is that I've replaced the 3 API calls that one of my faculty members' scripts use. He is using this for management of hundreds of course repositories and associated Continuous Integration build runners; doing it manually was just not an option. The issue with all this is that it's a Hacky McHackerson solution, with broken pieces of code (even some security validation routines) commented out. At 4 pm on New Year's Eve, as my faculty colleague and I were hacking away on this and testing it, I said, "This is the worst thing I've ever had to do in my professional career." I finally got it finished about a quarter after midnight on Jan 1. Upon further reflection, I stand by this statement. Not only did it take a large amount of time to fix something which should not have been broken, the actual code I hacked in that's running it is kind of an embarrassment; this is mostly due to me being unfamiliar with RoR paradigms.

Lessons Learned:
1) Have a better testing methodology in place.
2) Be sure that you get sign-off from your power users before pushing an update to production.
3) Having gotten a little bit of experience with RoR, I understand its power more fully, and hate it even more.

Posted in /dev/random | 1 Comment

A Matter of Perspective

Over the last two weeks, I saw two talks. One was from RMS, yes THE Richard Stallman. The other was a panel of industry professionals on Cloud Computing, with representatives from Universities, Google, Microsoft and Amazon.

I found Stallman's talk inspiring. I think he's fairly bat-shit, but honestly, he's an extreme force for good in the universe, a needed extremist, if you will. I find his idealism and his adherence to principles rather refreshing, when so many times people put cost, as in bottom line, ahead of all else. One thing he said which stuck with me was basically, "When discussing free software with people, many are unwilling to be inconvenienced at all, in which case they've assigned a value of 0 to freedom." It basically comes down to how much control of your stuff is worth to you.

One thing I've come to realize over the years, and it's something I keep coming back to every time I'm let down by a 3rd party: no one cares about your stuff as much as you do. This leads to all sorts of life decisions. I maintain my own cars; this way I keep the safety of myself and my family in my own hands. I run GNU/Linux everywhere that is reasonable in my house, because my data and usage habits are not available to the highest bidder. As a family, we made a decision for my wife to stay home with the kids instead of going back to work. All of these decisions lead back to my previous statement. As such, Stallman's message rings true with me at a fundamental level.

When listening to the Cloud Computing panel, I found myself getting more and more frustrated on a couple of points. Overall there was a condescending attitude towards folks who had not yet drunk the Cloud Kool-Aid; aside from that, they talked about Security and Control. Basically, it came down to, "Just trust us", but that's the fundamental problem, isn't it? A commercial, for-profit entity cannot be inherently trusted. Its purpose is to generate revenue and profit. They are restricted in what they do to you, the customer, based on the contracts involved. In most cases, they have better lawyers than you writing those contracts. Based on this, how can anyone place unwavering trust in a business?

On the security front, they talked about how physically secure their data centers are. An anecdote explained how even the COO of Some Company couldn't get access to the place because he didn't have the proper clearance. Honestly, anyone can lock a door and hire some rent-a-cops to keep people out; it just rang really hollow as a measure of awesomeness. They also spoke about the data security of having backups done automatically (but be sure you clicked the right button in their interface), whereas maybe your local people aren't doing backups right. So we're supposed to trust that their people are doing backups right, but mine aren't? Why? Enterprise backup has been commoditized for a while now; if your people aren't doing it right, you need new people. Regardless, no one is immune to a failure on this front: http://gmailblog.blogspot.com/2011/02/gmail-back-soon-for-everyone.html

What was more telling, however, was what they didn't say. They didn't talk about surface-area exposure. They didn't talk about the size of the target they represent, with the minor aside that, as a multi-user platform, it was designed to be secure from the start — whatever that means. Most importantly, they didn't talk about protection from government and law enforcement agencies, which, if you're not aware, can be a sticky subject. Just because Vendor A distributes your data around the world to keep it safe in case of hardware failure, that data, due to its physical location, can be subject to *very* different legal statuses depending on the country in which it's housed. Honestly, this is one area where housing your own data trumps what any 3rd-party vendor can offer, hands-down, on a couple of points. 1) No one is getting at any of my data without a warrant; do I have that guarantee from them? 2) Assuming someone is going to get at my data, I'm guaranteed to know about it and can take appropriate legal measures as early as possible. ** NSA and Chinese hacker groups notwithstanding, but there are no guarantees with the vendor there either.

On the control front, they came up a little short for me as well. What almost everyone on the research side is worried about is the elastic costs involved. Having a "reserved instance" is a way to hedge that, but running a VPS is not revolutionary — and I can get one from any number of providers with a very straightforward pricing structure, something for which AWS many times comes up short. Also, it doesn't actually use cloud compute cycles for what they're good for — which is scaling out dynamically to handle burst loads. They talked about all the granular controls you can have, and it turns out to be a lot of administrative overhead to get everything set up properly. For folks who have a grant and have X amount of dollars to spend, it still makes sense for them to spend some percentage of that grant on a (perhaps) beefy box that they own. They know the cost of that box and they have exclusive use of it. This is especially true for a research endeavor where your compute needs are only vaguely estimated at the outset.

For those of you who don't live in this world, the analogy is this: do I make a capital investment in buying a house, which I then have for my exclusive use, or do I rent rooms in a building, where I can have a different number of rooms each day depending on my needs?

Let's be honest, renting time on someone else's systems is nothing new. This was the paradigm of computing basically until the micro-computer revolution of the 1980s and the PC revolution of the 90s wrested control away from monolithic computing companies. It is that hard-won victory of control and ownership that the Cloud movement is asking us to give up. Don't forget to pay your bill at the end of the month for the privilege.

Posted in /dev/random | 1 Comment

Computer Scientists — The Next Generation

I've just had the privilege of teaching an Intro to Computer Science class for the last two semesters at the University where I work.  The class covered basic first-semester programming topics, such as functions, loops, conditionals, arrays and hash tables.  The class had about 20 students each semester and is designed for majors and non-majors.  It is, however, the prerequisite that all majors need before taking pretty much any other course in the department.  What a heavy responsibility!   What follows is a list of musings about the experience.

1) What a joy watching a group of people go from not knowing how to program a thing on day one to writing good CLI versions of Hangman and Vigenere ciphers by the end.

2) "The Digital Generation" is a concept touted by people who don't work with young people.  Just because someone has a handle on the UI doesn't mean they know *anything* about how it works.  How many people know how to drive vs. understand the finer points of multi-port fuel injection?  So it is with the under-20 crowd and electronics.

3) Some folks take the Intro to Programming course to cover a Math/Quantitative Gen-Ed requirement, thinking it's going to be easy.  I try to dispel that notion on the first day and tell them this is a hard class: they're going to be contorting their minds to work in brand new ways.  The class ramps up slowly, so they don't always believe me right away.

4) We forget how beautifully naive “normal” people are when it comes to how hard it is to write bug-free software.  It’s not like writing a book or drawing a picture where there is value in nuanced meaning and imprecision.  It’s more like concurrently writing and following a recipe for the most complicated meal you’ve ever cooked/invented and if one ingredient is off by even the most infinitesimal amount, your kitchen explodes and burns to the ground.   I think the above is my new favorite analogy.

5) I make them use a central Linux server and write their Python in a plain text editor.  I'm sure there are all sorts of people who are making a lot of money by trying to make programming "accessible to the masses" who would tell me that I'm doing the students a great disservice.  I've looked at a number of these tools and programs and haven't been impressed with any of them.  Software Engineering is hard; trying to dupe people into thinking that your magical method will make learning hard stuff super simple is disingenuous at best.    The best method to teach *anything* is to start slow with the basics and work your way up.   It's the same basic problem with math curricula designed to be easy — the result is kids who can't think in an organized, methodical way, which is the basis for all science.

6) I had a Senior Art Major, who had put off her Quantitative Gen-Ed requirement until the last possible semester due to dread and ended up being one of the strongest students in the class.  I think once she got her head around the problem space, the new language and the linux command prompt, she really dialed into the world of abstractions and patterns that is software development (and art) — it was neat to watch.

7) The coolest feedback I’ve gotten, from multiple students independently was, “This is my hardest class.  This is my favorite class.”

8) Even Adjuncts get better offices than Full Time IT staff.  I really don’t want to clean out my temporary office space.  It’s big, it’s quiet and there is only one person in it — me.

9) I'd love to pursue this path, but it's hard to get more letters after your name when you're on a good career path in industry.

10) Seeing people get into a bad habit early and being unable to break them of it is frustrating.  "How many times do I have to tell you guys? Don't 'break' out of your loops.  If you're using break, you're doing it wrong."  If I teach the class again, I'm just going to forbid it.  Also exit() to get out of a function.  From here on, those are worth a zero on graded assignments… for whatever that is worth.

 

Posted in /dev/random | Tagged | Leave a comment

Sometimes the project finds you…(part 1)

I have been, for years, an unabashed Slashdot.org fanboy.  I've been accused of being a Slashdot apologist by friends who feel that Slashdot hasn't been relevant for 10 years.   To them I say: it's relevant to me, and one of the few places on the internet (miata.net being the other) where the level of discussion is a cut above the noise of lolcats and Candy Crush.  So what made me quit cold turkey in February and not go back?

First, some background:  Slashdot was a guy's personal project, which went commercial and had a few corporate parents over the years.  Mostly the parents just stayed out of it, until the latest acquisition last year, when the site was bought by Dice.com and the founder departed the company.  After paying some ridiculous amount of money for a cornerstone of the internet, with an unrivaled population of opinionated neckbeards, Dice decided they needed to monetize the nerds.  Some of the changes this brought on were fairly innocuous, others mildly irritating, but it was mostly fine.

Now, the core piece of Slashdot, the differentiating factor from other news aggregation sites, was the commenting system.  You don't just get an unlimited +/- tally on a comment; your post gets a rating from -1 (troll) to +5.  These ratings are bestowed upon you by moderators, not just any reader.  Anyone could log in to find they had mod points to spend; I generally would get mod points at least once per month.  But it was moderation of the people by a random, always-changing subset of the people.   The *really* interesting overlay on this is that the more + mods bestowed upon your comments, the more Karma you'd gain – essentially a site-wide reputation score.  Slowly over the last few years they've been dumbing this system down, to the point where, with the latest revision, the entire commenting and moderation system has been completely gutted in an effort to "make the site more mainstream."

So what's a nerd to do?  Boycott the site, of course.  So we did — a lot of us.  It was obvious, with people committing to the "Slashcott", that there was an opportunity there.  Slash, the custom back-end system that runs (ran?) Slashdot, was open-sourced years ago.  Could the nerds re-create their own paradise?  A ton of folks started working on stuff, going in different directions, but efforts soon started to coalesce around the altslashdot.org effort, myself included.  Soon it became obvious this was where the action was going to be.  We had 5 days until the start of the Slashcott; surely we could have a site up by then.  A bunch of us started independent efforts to install Slash.   As it turned out, this was no simple task.    A hastily (carelessly?) open-sourced code base, which had been essentially abandoned for 5 years, began to paint a harsh reality for us.

Part 2 will talk about that first 10 days' effort.

Posted in /dev/random | Tagged , | Leave a comment

Config files

I'm working on setting up a test system at work. The system is a test environment for a major upgrade of an existing web application used across campus. As such, we're testing all the fiddly bits: getting it hooked up to the ERP system, making sure single sign-on and SSL termination are working right, etc.

The initial setup went well, but I noticed that there were some strange behaviors here and there:
1) Javascript was broken in Chrome
2) JSON errors on some pages, mostly ones that had an upload/drag-n-drop widget on them.

A quick F12 to bring up the net console in Firebug revealed that I was getting 503 errors for certain resources on the page. I also noticed that though I was browsing the site in https, some includes from the css were being requested over http, getting an http->https redirect from the SSO reverse proxy and ending in a 503. That mostly explained the issues above. Chrome just wasn't having any of that, and FF sort of soldiered on and gave you a mostly working page.

This application is a php app and employs a fairly standard config.php in the docroot to set up the db connection, variables for the site, etc.  A quick look through the config.php showed that, in fact, I had defined the webroot as http://footest.bar.edu instead of https://footest.bar.edu. That explained the http includes from the css. A few keystrokes ([a s [esc] :wq!]) and done, No Big Deal™.

Back to my browser, hit F5, and… the site is completely broken, with a message to the effect of, "Sorry, you can only browse this site in https!" We used to make fun of my Grandpa for yelling at the TV; I'm sure my kids and grandkids will make fun of me for yelling at my terminal sessions. So I recheck the config file. Nope, everything is correct, including the parameter which tells the software that it's behind an SSL proxy. Changing the value of $sslproxy from true to false and back again doesn't have any effect… curious.

At this point I leapt down the rabbit hole. I tried changing the $webroot, $reverseproxy and $sslproxy variables to every value I could think of. The only thing that had any effect was the webroot, which either broke javascript or the entire site. After some amount of time doing the same thing and hoping for a different result, I started to dig into the code (yay open source!). I traced the issue down to the url validation code (which is a total WTF in itself). The developers of this product are concerned with a couple of things. 1) That you're going to access the site over http accidentally, when you should be over https. This is a valid security concern. 2) That you are never, ever going to access the software by any url other than the one defined in the config. This is paranoid, to the point that there was a completely infuriating comment in part of the validation code that read:

// hopefully this will stop all those "clever" admins trying to set up product
// with two different addresses in intranet and Internet

 
I'm sorry, there is no argument you can make to me where I shouldn't be allowed to configure access under as many different urls as I want. Oh! It's because you're hardcoding links and references in the database based on the defined $webroot? Your product is broken. Absolute URLs have no place in a modern dynamic web application.

Anyway, that aside, I found that this url validation block was executing even though it was wrapped in an if (empty($sslproxy)) block, meaning that if the $sslproxy variable was set, it should not execute. So I added some debug code to the function and determined that, as far as the code was concerned, when this function was called, $sslproxy was not set.

Curiouser and Curiouser!

Okay, back to the config, let's step through and… wait a minute, there is an include of a setup.php at the bottom of the config file…


What if I set my $sslproxy before the include and…BINGO.

This is the crux of my rant here.
1) I have never seen a config file where things get ignored if they happen to be at the bottom of the file, which is a reasonable place to put site-specific customizations.
2) Why on Earth is the config file calling anything? The config file should only ever be called by other stuff.
3) If you've chosen to implement a broken system like the above, at least put a comment in the code, something along the lines of, "# Add all site-specific customizations above this line, else they may not be respected".  My site-specific config file now has just such a comment.

Oh well.

Posted in A Day in the Life | Tagged , , | Leave a comment

Zypper dup…

Okay, I'll get this out of the way right now.  I'm a Linux fanboy.  More than that, I love the Lizard.  Give me Suse or give me VMS (that's another story though).  I try to keep up to date on the Suse release cycle, getting to the newest version within a month or so of release.   I'll give it a few months on the laptops, then upgrade my work desktop.  Pretty much the only time I upgrade my personal server is when the distro is EOL.  Thankfully, the Suse Evergreen project means I'm not trying to do scary major upgrades on a production server every 6 months.   As an aside, I'm a huge fan of the Evergreen project.  Just before the Evergreen project was announced, I *almost* jumped ship to one of the Ubuntu LTS distros for some stability; thankfully, however, I'm still green.

Regardless, Suse 12.3 was recently released.  As has been my practice, I started on my 10-year-old IBM R40.  I ran zypper dup and everything just worked, as usual.  No issues after about a week, so I said great, I'll try it on my Lenovo T400!

I now have a Lenovo T400 that hangs on boot at "Starting Login service".  I've tried every trick in my bag to revive the install, to no avail.  I think I'm in it for a clean install on the T400, which is a disappointment.

I will say that this is the first Suse upgrade I've had go bad… ever.  I've been running it as my primary OS since 2006, version 10.2. I've run the distro upgrade on my R40 countless times since then.  I've taken it from 10.2 -> 10.3 -> 11.1 -> 11.2 -> 11.3 -> 11.4 -> 12.2.   There were some pretty crummy versions out there (11.1, I'm looking at you!) and some awesomely stable ones (11.4 — so good, I skipped 12.1 altogether).

Posted in Linux | Tagged , | Leave a comment

Hello world!

Since this is the invocation of a new WordPress site, I thought I'd leave the Hello World title… at least until I get a real post up about stuff and finish getting the site laid out and whatnot.

Is it okay to feel dirty doing this in WP instead of Drupal?

Posted in /dev/random | Leave a comment