
An interesting question

I didn't find this post interesting overall, but I did find the question it asks at the start interesting: "what are some things that you used to strongly believe but have now changed your mind about?" This is mainly in the context of coding, though it can be fun to answer that question for other things too.

First I'll go through the areas that post covers, then highlight a few other, even more minor, belief changes of my own. You might want to skim that post first.

First, on Everyone is doing it wrong.

I do still believe basically everyone is "doing it wrong", but that includes myself. However, I don't think there's a simple solution, let alone a silver bullet; there are tradeoffs everywhere. While laziness, stupidity, greed, and immorality can all play a part in why things aren't better, there are other factors too.

My favorite computer book of the last few years was The Psychology of Computer Programming (buy a used hardcover). Part of the reason is the historical perspective it offers -- it was written in 1971. It is fascinating to read and think about the problems it describes, and how decades later we are either still dealing with them or have resolved them in various ways. Even by '71, things had changed quite a bit from the 50s and 60s, and the author points those changes out. For instance, programmers used to have to queue in a single area to wait for their turn to run their program and get its results, because there was only one machine. This had a happy side effect of spontaneous collaboration and even design rework (since you could talk about your program with others before running it), but with the advent of timesharing and on-line systems, that spontaneity decreased and initial errors seemed to increase (though productivity improved anyway because of faster turnaround times).

Imagine we lived in a world where software development was dominated by a few giant companies with thousands of programmers, whose practice involved owning only a few machines shared among all of them. Programmers had to print out their programs when they were ready, walk down to the nearest machine room, and have them scanned back in -- and, if any input was needed, prepare a separate stack of input for a human operator to feed in at various times.

And then little you, fresh out of college, joins a startup that employs only 4 other programmers, but you each have your own machine, all to yourselves. You never need to print out your code unless you want to: you just type it into the machine itself, it compiles and runs it, and you can enter any input you need yourself too!

Would you not be tempted to think that "everyone is doing it wrong"? Not excluding yourself, of course, because you know there must be something even better than what you've got -- but especially the guys in those giant companies, who are doing it even wronger?

That's how I feel about a lot of today's software industry. And no, in that hypothetical world, the "simple" solution of just giving everyone their own PC wouldn't fix things overnight; entirely different practices would need to be undertaken and explored.

I think this feeling has grown stronger over time. However, I've also come to care less about it. People are doing things wrong, but they're getting things done anyway; that's how humanity operates. We must also allow for change even when we think we're doing things pretty well, because perhaps there's a way to do better.

Also, I see hints of people doing things less wrong over time, in various ways, and can try to be content with small incremental advances even though of course it'd be nice to see bigger jumps.

I'm verging on skipping ahead to a later section, which is about programming languages, but for now I'll just say that to me the author's changed belief in this section isn't so much that he thought people were doing it wrong, it's that he thought things should be easy, and kept looking around for easy ways to get something done instead of rolling up his sleeves and doing it. I can't say I've had the same belief, apart from very small hubristic "this seems easy" thoughts on my personal projects before I get into one for a couple days, realize it's not so easy, and am deflated. Work projects ended up going more smoothly, both in estimation and eventual execution. When things took longer than I expected, it wasn't typically because I was heads-down trying to solve a hard thing, but because I was out of psychic energy from dealing with the other stuff that comes with working at a big company and gets in the way of pure coding productivity.

On programming should be easy, I've never thought this. I've had a related thought, which I just went into, of "this shouldn't take this much time!", but when I see the reality, I'm typically deflated and just stop, rather than looking elsewhere for an easier way to do things. I like the article's lines here: "If you have a mountain of shit to move, how much time should you spend looking for a bigger shovel? ... At some point you have to shovel some shit." I think I'm pretty good at not spending 100% of my time looking for a bigger shovel (though sometimes I'll spend much more than 0%, sometimes not -- business constraints do a great job of biasing that effort towards 0%), but when I recognize that what I thought was a small pile of shit is actually a mountain, I'm deflated and want to give up. Being paid is a great motivator to dig in and shovel as much shit as necessary with the tool you've got/are forced to use. If I have a belief change on this topic, it's that extrinsic motivation is very important. I've had many grandiose visions of side projects I could have been working on over the years, but sadly the best and most grandiose thing I've made is (and probably for a long while will be) my contributions over 5 years to a product at my last job, about which I'll always brag that a book got written.

On The new thing, I've been fairly safe from this too. Part of it, I think, is that my curiosity is just lower; part of it is that I've already seen a lot of stuff. And as a corollary of seeing a lot of stuff, I've gotten decent at predicting what my ultimate reaction to a new thing will be. For example, assuming I am in fact smart enough to grok Haskell (doubtful) and gave it an honest attempt, I still don't think I would be blown away, because I predict the particular experience it gives, even "on steroids", is not one I actually enjoy. (That is, type-driven development -- having your types work for you, modeling your problems in terms of type theory, rather than types just being markers.) I've leveraged lesser static languages' type systems to do things like that, and I can't say I like it, though it can be very cool.

"Mainstream ideas are mainstream for a reason. They are, almost by definition, the set of ideas which are well understood and well tested. We know where their strengths are and we've worked out how to ameliorate their weaknesses. The mainstream is the place where we've already figured out all the annoying details that are required to actually get stuff done. It's a pretty good place to hang out."

This goes back to the first issue: people are doing things wrong. I disagree with the above quote -- mainstream things are the set of ideas that are fashionable. Fashion, almost by definition, is poorly understood; otherwise fashion designers would find the One Fashion Design To Rule Them All and we'd be done with the endeavor. Meanwhile, in software, I've noticed in my own short time that things come and go, and I can read history to see even more things that came and went. What has stuck around doesn't seem that much better understood than it used to be (for instance, OOP or unit testing).

Sticking it to the mainstream is sticking it to fashion. It's not necessarily wise, but it's also how new fashions emerge. I do mostly agree with the article's point that "to sift through the bad ideas and nurture the good ones you have to already thoroughly understand the existing solutions." This might be a change in my beliefs over time; I'll get to examples later. But basically, to continue the fashion metaphor, if you're going to go your own way, it's worthwhile to know exactly what it is you're deviating from, so you can compare the two more fairly and not just jump around the search space randomly, only comparing with what you've seen before.

I share a belief change with the author about the practicality of debuggers. I didn't think they were needed either. Now I appreciate them, sometimes quite a lot. I'm not fully in the camp of "you MUST have a debugger": I do think many problems can be solved with pure reason, and when that fails, with reason augmented by print statements. But having experienced and made use of debuggers, I'm glad they're there when I want one, and I'd never attempt a large project in a language without a good debugging story. To be clear, I don't even necessarily mean a line stepper, though of course that's useful in certain circumstances; what I'm really in love with is interactive change. (Languages.. we're getting there.)

Learning X will make you a better programmer

The author started with Haskell, so it's unsurprising to me that learning other programming languages as the X wasn't that enlightening for him. It is a bit surprising he didn't get more from Clojure -- perhaps he would have, had he learned Common Lisp instead -- but I still somewhat understand: Clojure's big thing for a lot of people isn't its lesser-lisp-iness, but its FP promotion of immutability and laziness without having to bang your head programming with types.

For me, I started with PHP. Later on I learned many languages, but I'll say that of the ones I've written significant code in (and gotten paid for it, or at least was on the clock...) (PHP, JS, Python, C, C++, Java, AS3/Flex, Clojure, Common Lisp, SQL), each one has taught me something useful and I think improved me as a programmer.

PHP taught me the value of a domain-specific language; however, it seems modern PHP is trying to make people forget that, and the way people write it these days with their huge frameworks just makes it seem like a crappier Java.

JS (specifically via Node) taught me the value of asynchronous design. Expressing a producer-consumer architecture as one Node program that periodically checks an email inbox and, on new data, grabs it, adds a bit of metadata, sticks it in S3, and puts a task id on the Simple Queue Service, plus a separate Node program that polls the queue for new tasks to work on, is really neat.
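To give the flavor of that pattern without the AWS plumbing, here's a minimal sketch in Java (which is what I'd probably reach for today anyway): an in-memory BlockingQueue stands in for the queue service, and the inbox/storage bits are hypothetical stubs, not the original Node code.

    import java.util.UUID;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class ProducerConsumerSketch {
        // Stand-in for the queue service: the producer enqueues task ids, the consumer polls them.
        static final BlockingQueue<String> taskQueue = new LinkedBlockingQueue<>();

        public static void main(String[] args) {
            Thread producer = new Thread(() -> {
                while (true) {
                    // Pretend this checks the inbox; real code would hit IMAP here.
                    String newData = pollInboxForNewData();
                    if (newData != null) {
                        String taskId = UUID.randomUUID().toString();
                        // Real code would write newData plus metadata to blob storage keyed by taskId.
                        taskQueue.add(taskId);
                    }
                    sleepSeconds(30);
                }
            });

            Thread consumer = new Thread(() -> {
                while (true) {
                    try {
                        // Blocks up to 20 seconds waiting for a task id, like long-polling the queue.
                        String taskId = taskQueue.poll(20, TimeUnit.SECONDS);
                        if (taskId != null) {
                            processTask(taskId); // real code would fetch the payload from storage
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            });

            producer.start();
            consumer.start();
        }

        static String pollInboxForNewData() { return null; } // hypothetical stub

        static void processTask(String taskId) { System.out.println("working on " + taskId); }

        static void sleepSeconds(int seconds) {
            try {
                Thread.sleep(seconds * 1000L);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

The nice decoupling property is the same as in the Node version: the two halves only know about the queue, not each other.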

Python taught me so much, I don't know where to begin.

You get the point. So why do I think learning these things has improved me as a programmer, while the article's author thinks his learning of new things hasn't improved him? Maybe it's just time? The author admits to not spending enough time to gain fluency, whereas I did gain fluency in the things I listed (and so have not listed things I didn't get fluent in, like Perl, or Ruby, or Scheme, though I've dabbled), and maybe you really need to get to that "fluent" level before you can really say you "learned" something. Shipping something in production just isn't good enough.

The author attributes working on problems that pushed the limits of his abilities as making him a better programmer than anything. For me, I don't have that experience. The hardest problem I've worked on that I utterly failed at (and proof exists on my github page...) did not improve me at all. And had I actually solved it, I can be confident that I wouldn't have improved either!

I do agree that doing more programming has helped make me a better programmer, but I think this deserves a caveat that one needs to be doing different kinds of programming. If all you do is webshit, or game engine shit, or game logic shit, or distributed systems shit, or database shit, or kernel shit, or CRUD shit, or webdriver test shit, or FPGA shit, or unit test shit, or gluing libs together shit, or... then you'll have a very narrow perspective of what programming is and can be. Every form of shit in this industry has its dignity and it's worth it to explore.

What has improved my programming most, I think, is working with other people. To be sure, a lot of this is having direct access to examples of what not to do. But take just the single item of code review: having someone else look at your shit and give feedback on it, actually wanting that feedback, and finding coworkers willing to give it instead of a "meh, LGTM" non-review -- that to me has been a great path of rapid improvement.

I'll agree with the author that learning lambda calculus is probably not that useful for becoming a better programmer. To the extent that automata theory is useful, I think most of its use is just in becoming familiar with regexes. Once upon a time in a high school programming contest, I had the misfortune of trying to tackle a problem that was trivially solvable with union-find (which I didn't know, and hence I failed to solve it in the time allowed), while my teammate had a problem trivially solvable with a regex (which he didn't know, but I did, but we didn't communicate and instead just banged our heads individually on our own problems). I've loved regexes ever since learning about them, I like to use them when I can, and I think they're incredibly useful. I got known as the "regex guy" by a bunch of people at my last job, but despite that I still once introduced an O(2^N) bug. Maybe if I knew more automata theory I'd have avoided the mistake I made.
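That O(2^N) blowup is the classic catastrophic-backtracking trap. This isn't the actual regex from work, just a minimal reconstruction of the failure mode, in Java:

    import java.util.regex.Pattern;

    public class BacktrackDemo {
        public static void main(String[] args) {
            // Nested quantifiers like (a+)+ let the backtracking engine try
            // exponentially many ways to split the input once the overall match fails.
            Pattern bad = Pattern.compile("(a+)+b");

            // A run of N 'a's with no 'b' at the end forces the worst case.
            String input = "a".repeat(28) + "c";

            long start = System.nanoTime();
            boolean matched = bad.matcher(input).matches();
            long millis = (System.nanoTime() - start) / 1_000_000;

            // Prints "matched=false" after a long pause; each extra 'a' roughly doubles the time.
            System.out.println("matched=" + matched + " in " + millis + " ms");
        }
    }

A DFA-based engine (the automata-theory answer) matches this in time linear in the input; knowing why is probably the most practically useful bit of that theory.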

I don't know category theory; I'm surprised the author regrets it. But then again, if you already know and are good at Haskell, isn't category theory just an also-ran among your concepts?

I'll agree with the author that for many X you can learn about, typically it's just useful for X and a few areas directly improved by knowing about X. I think knowing about regexes, knowing more data structures and algorithms, and knowing hardware details absolutely make one a better programmer. More than being subject to code review? Well, if the reviewer doesn't know those things, probably not, but when the reviewer knows as much or more than you do, or just other things than you do, you create a virtuous cycle. Without that, though... I once saw a Principal-level engineer on another team pooh-pooh someone's code review because they had split up a long method into several package-private methods to aid readability and unit test writing, on the grounds that "function calls are expensive". I had three initial thoughts: 1) this is not performance-sensitive software; 2) measure it; 3) this is Java, so it's very likely that if this mattered at all, the calls would be inlined, and in fact inlining depends on having small methods, because too much bytecode will make the JIT give up even trying. A fourth thought was the set of thoughts on why unit testing is good, which the Principal didn't share. Anyway, the code eventually got committed, ignoring the comments of that Principal.
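For concreteness, the change under review was shaped roughly like this (hypothetical names, not the actual code), with the inlining point left as a comment:

    import java.util.Arrays;
    import java.util.List;

    public class ReportGenerator {
        // Before: one long public method doing parsing, filtering, and formatting inline.
        // After: small package-private steps that can be unit tested individually.
        public String generate(String rawInput) {
            List<String> records = parseRecords(rawInput);
            List<String> active = filterActive(records);
            return formatSummary(active);
        }

        // Package-private so tests in the same package can exercise each step directly.
        List<String> parseRecords(String rawInput) {
            return Arrays.asList(rawInput.split("\n"));
        }

        List<String> filterActive(List<String> records) {
            return records.stream().filter(r -> !r.isBlank()).toList();
        }

        String formatSummary(List<String> active) {
            // Tiny bodies like these sit well under HotSpot's bytecode-size inlining
            // thresholds (-XX:MaxInlineSize, -XX:FreqInlineSize), so after JIT warmup
            // "function calls are expensive" is very unlikely to apply here.
            return active.size() + " active records";
        }
    }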

I agree with the author that opportunity cost is a killer, and breadth is probably the better bias to have. At the same time, there are things in the depths you won't find if you're biased towards breadth.

If I could go back and do it again, I'd ditch every language but Java (because that's what I got my money in) and Common Lisp (because that's the least wrong way to program). The things that have made me a better programmer over time I think could have been captured just fine by that combination alone.

On Intelligence trumps expertise, I think the first time I got that deflated feeling from realizing a problem was much harder than I thought, I understood this was not the case, if I ever believed it to be. I think I was also fortunate to read this early on: "Old age and treachery will beat youth and skill every time."

Still, I think intelligence is a better hiring filter for most kinds of software jobs. That Principal I mentioned before had many years in the field, but I don't think they'd score over 1 standard deviation above the mean on an IQ test. When I interviewed people myself, I sometimes wondered how they got their current roles, they seemed so off at the basic programming tasks I set them. Time and again I've seen that people 1 standard deviation up and beyond are so much better than those around 100, even when the latter have the edge in expertise. So much of intelligence is just speed; the smarter but less experienced person will catch up with the expert fast enough. Sometimes not fast enough to matter in a business environment, sure, so intelligence doesn't always trump expertise.

Many programmers like to play with and attempt things "from first principles". For me, this is about curiosity, and seeing what I can come up with, not hubris that I may be smarter than everyone else. Knowing examples of the contrary (that is, one man or sometimes a small group of men slapping the collective faces of entire cottage industries of expertise and prior art, from graphics engines to terminal emulators to truly huge-scale databases to...) is seductive. "I could be like that!" the delusional part of my brain thinks. Fortunately I know it's delusional, or if I fall for it anyway, then once I realize the shit mountain in front of me I'll likely be deflated right back.

Ok, I've rambled on forever in this post no one will read. I should actually answer the interesting question in more detail. What are some of my beliefs that have changed?

I think Common Lisp is the best way to develop software. The interactive nature is just the way things are meant to be. Having to write, save, compile, run -- find some issue, or some improvement, and close, write more, save, compile, and run again -- just sucks. This belief was sparked when I learned Python and got a taste of how useful an interactive REPL is, it grew stronger when I learned enough Clojure to write interesting things, and it became overwhelming once I learned enough CL to get a nice groove going.

However, it was reinforced tremendously by improvements in other language ecosystems, namely Java's. Java, and especially my use of it, has come a long way since I first learned it in AP Computer Science AB, using nothing but EditPad Lite to write my code, command-line javac to compile it, and command-line java to run it (or in a few cases, applets). When writing, I didn't even indent my code at all (until the teacher threatened to dock points, then I indented one space). EditPad Lite didn't have syntax highlighting; its only benefits over Notepad were that it could have tabs of multiple files open at once, and that you could save-as with whatever extension you wanted without having to first deselect the 'text' filetype from a dropdown.

Later I learned vim, and at the job I had while in college I did all my Java work in vim, though now enjoying syntax highlighting and indentation. I learned about ant instead of javac. A few times I learned how to fire up Eclipse to debug something hairy, but I wouldn't use it for anything else, and I didn't need such power very often. In Clojureland, I found Slamhound and integrated it with vim; it was nice.

At my last job, I used Eclipse all the time, only using vim for non-Java stuff. But more importantly, I lived in "debug mode". With the JRebel plugin, I could almost approach a Lisp-like way of developing software. You have incremental compilation, check; you can change things (many, at least) while the program is still running, check; you can even set a breakpoint, change things, resume, and maybe hit the new code. In the huge codebase of millions of lines I didn't have to remember the package of everything, or even where files lived; I could just let Eclipse find and complete things for me. I learned the value of automated refactorings, even though in Lisp you're less tightly coupled and need them less... I learned a lot at my last job, and a lot of it was thanks to Java.

This is something I think a lot of Lisp guys don't get -- just how far other ecosystems have caught up. Like, yeah, it's super cool that in Lisp I can, without ever closing my Maze program, add in code for pathfinding, add code to set start/end spots via clicking, press f to solve and draw a path, change the path to a silly neck thing and one-by-one handle the edge cases to draw the right neck segment... but have you ever used a game engine like Unreal, Unity, or Godot? While we Lisp programmers sit here proud that we can define our tile world map in a Lisp list with our text editor and render it without closing the game, in game engine land they just drag-and-drop stuff onto an interactive scene, "run" it (or parts of it) when they want to test its dynamic nature, and in many ways have a better experience. Many, many years ago game devs had tools that let you pick a pixel on the screen and trace back to every piece of code that contributed to that pixel's value; such a useful debugging tool far outclasses CL's built-in trace function for the domain.

So I've kind of got a meta belief, that one day (perhaps in 10 years, perhaps less) I'll no longer want to program everything in CL, and CL's benefits will have finally been completely matched or surpassed by other ecosystems. We'll see how that turns out.

It's worth mentioning too that after high school I really disliked Java. By the time of my college job, I tolerated it. After my last job, I actually kinda like a lot of it, and would prefer it nowadays for many types of software that I would previously have only thought worthwhile to do in Python or perhaps Node, maybe PHP. I've only grown less enamored with Clojure the more CL I learn and get used to, though in terms of being on the JVM and working with existing Java code, Clojure still beats ABCL by quite a bit.

I've changed my mind about symbol names, in that long names are fine, and especially fine if you have some sort of auto-complete.

I've changed my mind about other stylistic things, namely that they mostly don't matter; the only thing I'll push for in a review is local consistency.

Long ago I changed my mind about assembly language, realizing that it's just x86 that sucks; PIC and ARM aren't bad.

I go back and forth on stored procedures in SQL databases.

TDD has value, though it's mostly in having unit tests at all, not the order you write them.

I don't need to understand every part of something in order to contribute and be productive. Though at the same time, you do need to be "that guy" who goes deep if you want to fix fundamental flaws. They won't fix themselves.

A picture is worth more than a thousand words, even to programmers, because nobody fucking reads.

Even in traditions I would have no interest joining, there can be value found.

Hierarchy beats anarchy, anarchy beats democracy.

Assholes in a fundamental sense exist, not just the sense that everyone is an asshole sometimes, and it literally pays to identify them and not get on their bad side.

Code review is probably the best intervention that can be made to improve software quality. At the same time, it carries a speed tradeoff. Being able to form cliques of trusted "async review" or "this is small/trivial, trust me" arrangements can mitigate this, but beware that this is mitigation by bypassing, and left unchecked it can ruin the gains.

This is just turning more ranty, I'll stop here...


Posted on 2021-09-30 by Jach

Tags: programming, rant

Permalink: https://www.thejach.com/view/id/383

