Haskell: The Bad Parts

Complete series

  1. Haskell: The Bad Parts, part 1
  2. Haskell: The Bad Parts, part 2

Haskell: The Bad Parts, part 1

There’s a popular book called “JavaScript: The Good Parts.” And there’s a common meme around the relative size of that book versus “JavaScript: The Definitive Guide.”

Haskell is, in my opinion, a far better designed and more coherent language than JavaScript. However, it’s also an old language with some historical baggage. And in many ways it’s a bleeding edge research language that sometimes includes… half-baked features. And due to an inconsistent set of rules around backwards compatibility, it will sometimes break code every six months, and sometimes keep strange decisions around for decades.

True mastery of Haskell comes down to knowing which things in core libraries should be avoided like the plague.

* foldl
* sum/product
* Data.Text.IO
* Control.Exception.bracket (use unliftio instead, handles interruptible correctly)

Just as some examples

— Michael Snoyman (@snoyberg) October 27, 2020

After a request and some tongue-in-cheek comments in that thread, I decided a longer form blog post was in order. I’m going to start off by expanding on the four examples I gave in that tweet. But there are many, many more examples out there. If there’s more interest in seeing a continuation of this series, please let me know. And if you have pet peeves you’d like me to address, input will be very welcome.

What is a “bad part”

Very rarely is there such a thing as a language feature, function, type, or library that is so egregiously bad that it should never, ever be used. Null is of course the billion dollar mistake, but it’s still incredibly useful in some cases. So when I say that something is a “bad part” of Haskell, I mean something along these lines: the downsides of using it outweigh the upsides in the vast majority of cases.

There’s a large tendency in the Haskell community to be overly literal in responding to blog posts. Feel free to do that to your heart’s content. But this caveat serves as a word of warning: I’m not going to caveat each one of these with an explanation of “yes, but there’s this one corner case where it’s actually useful.”

Why attack Haskell?

Since I’m a Haskeller and advocate of the language, you may be wondering: why am I attacking Haskell? I don’t see this as an attack. I do wish we could fix these issues, and I think it’s a fair thing to say that the problems I’m listing are warts on the language. But every language has warts. I’m writing this because I’ve seen these kinds of things break real world projects. I’ve seen these failures manifest at runtime, defeating yet again the false claim that “if it compiles it works.” I’ve seen these become nefarious time bombs that disincentivize people from ever working with Haskell in the future.

I hope by calling these out publicly, I can help raise awareness of these problems. And then, either we can fix the problems at their source or, more likely, get more widespread awareness of the issue.

Also, because it feels appropriate, I’m going to take a more jovial tone below. I personally find it easier to beat up on a language I love like that.

foldl

Duncan Coutts already did this one. foldl is broken. It’s a bad function. Left folds are supposed to be strict, not lazy. End of story. Goodbye. Too many space leaks have been caused by this function. We should gut it out entirely.

But wait! A lazy left fold makes perfect sense for a Vector! Yeah, no one ever meant that. And the problem isn’t the fact that this function exists. It’s the name. It has taken the hallowed spot of the One True Left Fold. I’m sorry, the One True Left Fold is strict.

Also, side note: we can’t raise linked lists to a position of supreme power within our ecosystem and then pretend like we actually care about vectors. We don’t, we just pay lip service to them. Until we fix the wart which is overuse of lists, foldl is only ever used on lists.

OK, back to this bad left fold. This is all made worse by the fact that the true left fold, foldl', is not even exported by the Prelude. We Haskellers are a lazy bunch. And if you make me type in import Data.List (foldl'), I just won’t. I’d rather have a space leak than waste precious time typing in those characters.
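
To make the distinction concrete, here’s a minimal sketch (the function names are mine, purely for illustration) of the fold you almost always want versus the one the Prelude hands you:

```haskell
import Data.List (foldl')

-- The One True Left Fold: foldl' forces the accumulator at every
-- step, so summing a large list runs in constant space.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

-- The Prelude's foldl builds up a chain of (+) thunks instead;
-- on a large enough list, that is your space leak.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0
```

Both produce the same answer; only the memory behavior differs, which is exactly why the bug hides so well.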

Alright, so what should you do? Use an alternative prelude that doesn’t export a bad function, and does export a good function. If you really, really want a lazy left fold: add a comment, or use a function named foldlButLazyIReallyMeanIt. Otherwise I’m going to fix your code during my code review.

sum and product

The sum and product functions are implemented in terms of foldr. Well, actually foldMap, but list’s foldMap is implemented in terms of foldr, and lists are the only data structure that exist in Haskell. “Oh, but foldr is the good function, right?” Only if you’re folding a function which is lazy in its second argument. + and * are both strict in both of their arguments.

If you’re not aware of that terminology: “strict in both arguments” means “in order to evaluate the result of this function/operator, I need to evaluate both of its arguments.” I can’t evaluate x + y without knowing what x and y are. On the other hand, : (list cons) is lazy in its second argument. Evaluating x : y doesn’t require evaluating y (or, for that matter, x). (For more information, see all about strictness.)
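
A tiny illustration of the difference, using undefined as a stand-in for a bottom value:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- (:) is lazy in its second argument: we can take the head of a
-- list whose tail is bottom without ever touching the tail.
headOfPartialList :: Int
headOfPartialList = head (1 : undefined)

-- (+) is strict in both arguments: forcing x + y forces x and y,
-- so a bottom operand blows up as soon as the sum is evaluated.
forcedSum :: Int -> Int -> IO (Either SomeException Int)
forcedSum x y = try (evaluate (x + y))
```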

“But wait!” you say. “What if I have a custom data type with a custom typeclass instance of Num that has a custom + and/or * that is in fact lazy in the second argument! Then sum and product are perfect as they are!”

That’s true. Now go off and write your own lazySum and lazyProduct. 99 times out of 100, or more likely 999,999 times out of 1,000,000, we want the fully strict version.

“But it doesn’t matter, GHC will optimize this away.” Maybe. Maybe not. Stop relying on GHC’s optimizer to convert horribly inefficient code into somewhat efficient code. (But I digress, we’ll talk about why the vector package is bad another time.)

Data.Text.IO

I’ve already covered this one once before when I told everyone to beware of readFile. In that blog post, I talk about a bunch of String based I/O functions, especially the titular readFile, which is obnoxiously exported by Prelude. Those are bad, and I’ll reiterate why in a second. But Data.Text.IO is arguably far worse. The reason is that there’s pretty good awareness in the community that String-based I/O is bad. Even though the String part is the least of our worries, it does a good job of scaring away the uninitiated.

But Data.Text.IO is a wolf in sheep’s clothing. We’re all told by people who think they can tell people how to write their Haskell code (cough me cough) that we should exorcise String from our codebases and replace it in all cases with Text. Attacking the Text type is a topic for another time. But the problem is that by cloaking itself in the warm embrace of Text, this module claims more legitimacy than it deserves.

The only module worse in this regard is Data.Text.Lazy.IO, which should be buried even deeper.

OK, what exactly am I on about? Locale sensitive file decoding. It’s possible that this has been the number one example of a Haskell bug in the wild I’ve encountered in my entire career. Not the spooky memory leak. Partial functions like head randomly throwing exceptions are up there, but don’t quite rise to the same prominence.

You see, when you are dealing with file formats, there is typically an actual, defined format. YAML, XML, JSON, and many others give a lot of information about how to serialize data, including character data, into raw bytes. We want to be consistent. We want to write a file in one run of the program, and have it read in a separate run. We want to write the file on a Windows machine and read it on a Linux machine. Or we want to interact with programs in other languages that read or write data in a consistent format.

Locale sensitive file encoding and decoding laughs in our face. When you use Data.Text.IO.readFile, it plays a mind reading game of trying to deduce from clues you don’t care about which character encoding to use. These days, on the vast majority of systems used by native English speakers, this turns out to be UTF-8. So using readFile and writeFile typically “just works.” Using functions from Data.Text.IO looks safe, and can easily get hidden in a large PR or a library dependency.

That’s when all hell breaks loose. You ship this code. You run it in a Docker container. “Oops, you forgot to set the LANG env var, Imma crash.” But it’s worse than that. Typically things will work well for weeks or months, because it can often be a long time before someone tries to encode a non-ASCII character.

The same kind of thing happens regularly to Stack. Someone adds a new feature that writes and reads a file. The code passes all integration tests. And then someone in Russia with a weird Windows code page set and a Cyrillic character in their name files a bug report 2 years later about how they can’t build anything, and we sheepishly tell them to run chcp 65001 or build in c:\.

Friends don’t let friends use Data.Text.IO.

“Oh, but putStrLn is fine!” Yeah, maybe. It’s also potentially slow. And it can throw a runtime exception on a character encoding mismatch. Just use a good logging library. That’s why we have one in rio.

EDIT Since so many people have asked: instead of readFile, I recommend using readFileUtf8, which is available from rio.
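
If you don’t want the rio dependency, a minimal sketch of the same idea (my names; rio’s actual API differs) is simply: do I/O on raw bytes, and decode or encode explicitly.

```haskell
import qualified Data.ByteString as B
import qualified Data.Text as T
import Data.Text.Encoding (decodeUtf8', encodeUtf8)
import Data.Text.Encoding.Error (UnicodeException)

-- Read raw bytes and decode them as UTF-8 explicitly, so the result
-- never depends on LANG, Windows code pages, or any locale settings.
readFileUtf8 :: FilePath -> IO (Either UnicodeException T.Text)
readFileUtf8 path = decodeUtf8' <$> B.readFile path

-- The symmetric writer: always encode UTF-8, no mind reading games.
writeFileUtf8 :: FilePath -> T.Text -> IO ()
writeFileUtf8 path = B.writeFile path . encodeUtf8
```

The same file now round-trips identically on every machine, regardless of environment variables.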

Control.Exception.bracket

This is by far the least objectionable of the bad things in this list. I included it because the entire original tweet was inspired by a coworker telling me about a bug he ran into because of this function.

Async exceptions are subtle. Very, very subtle. Like, super duper subtle. I’ve devoted a large percentage of my Haskell teaching career to them. Async exceptions are a concept that doesn’t truly exist in most other languages. They require rewiring the way your brain works for proper handling. Helper functions help alleviate some of the pain. But the pain is there.

Then someone said, “You know what? Async exceptions aren’t subtle enough. Let’s invent two different ways of masking them!”

Wait, what does masking mean? Well, of course it means temporarily blocking the ability to receive an async exception. Totally, 100% blocks it. It’s like async exceptions don’t exist at all. So you’re totally 100% safe. Right?

Wrong. Masking isn’t really masking. Masking is kinda-sorta masking. No, if you really want protection, you have to use uninterruptibleMask. You knew that, right? Of course you did, because it’s so incredibly obvious. And of course it’s painfully obvious to every single Haskeller on the planet just how important it is to choose normal mask versus uninterruptibleMask. And there’d never be a disagreement about these cases.

In case the tone isn’t clear: this is sarcasm. Interruptible vs uninterruptible masking is confusing. Incredibly confusing. And nowhere is that more blatant than in the Control.Exception.bracket function. Interruptible masking means “hey, I don’t want to receive any async exceptions, unless I’m doing a blocking call, then I totally don’t want to be masked.” And Control.Exception.bracket uses interruptible masking for its cleanup handler. So if you need to perform some kind of blocking action in your cleanup, and you want to make sure that you don’t get interrupted by an async exception, you have to remember to use uninterruptibleMask yourself. Otherwise, your cleanup action may not complete, which is Bad News Bears.

This is all too confusing. I get the allure of interruptible masking. It means that you get super-cool-looking deadlock detection. It’s nifty. It’s also misleading, since you can’t rely on it. Really good Haskellers have released completely broken libraries based on the false idea that deadlock detection reliably works. It doesn’t. This is a false sense of hope, much like rewrite rules for stream fusion.

I’m not putting mask on the “bad parts” list right now, but I’m tempted to do so, and claim that uninterruptibleMask should have been the default, and perhaps only, way of masking. (Reminder for later: throw is a horribly named function too.) But I am saying that bracket defaulting to interruptible masking is a mistake. It’s unexpected, and basically undocumented.

In unliftio (and therefore in rio) we provide an alternative bracket that uses uninterruptible masking. I debated this internally quite a bit, since I don’t generally like throwing new behavior into an old function name. I eventually agreed with the idea that the current bracket implementation is just buggy and should be fixed. I still feel a bit uneasy about the decision though. (Note that I made the opposite decision regarding sum and product and included the broken versions, which I also feel uneasy about.)
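
For the curious, here is a simplified sketch of what such a bracket looks like. This is not the actual unliftio implementation, which handles more details; it only shows the key move of running cleanup under uninterruptibleMask:

```haskell
import Control.Exception (mask, onException, uninterruptibleMask_)

-- Like Control.Exception.bracket, but the cleanup action runs under
-- uninterruptibleMask, so a blocking cleanup cannot be cut short by
-- an async exception. Simplified sketch, not production code.
bracketUninterruptible :: IO a -> (a -> IO b) -> (a -> IO c) -> IO c
bracketUninterruptible acquire release use = mask $ \restore -> do
  resource <- acquire
  result <- restore (use resource)
              `onException` uninterruptibleMask_ (release resource)
  _ <- uninterruptibleMask_ (release resource)
  pure result
```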

Credits for this one go to Yuras and Eyal, check out this Github issue for details.


I’ll reiterate: Haskell is a great language with some warts. Ideally, we’d get rid of the warts. Second to that: let’s be honest about the warts and warn people away from them.

If you’d like to see more posts in this series, or have other ideas of bad parts I should cover, please let me know in the comments below or on Twitter.

Haskell: The Bad Parts, part 2

If you didn’t see it, please check out part 1 of this series to understand the purpose of this. Now, for more bad parts!

Partial functions (in general)

Laziness very likely belongs in this list. My favorite part of criticizing laziness is how quickly people jump to defend it based on edge cases. So let’s be a bit more nuanced before I later get far less nuanced. Laziness is obviously a good thing. Strictness is obviously a good thing. They also both suck. It depends on context and purpose. Each of them introduce different kinds of issues. The real question is: what’s a more sensible default? We’ll get to that another time.

I called this section partial functions. Am I having a senior moment? Maybe, but I intentionally started with laziness. In a strict language, function calls can result in exceptions, segfaults, or panics. (And if I write a “Rust: The Bad Parts”, believe me, I’ll be mentioning panicking.) If a function acts like it can successfully perform something, but in fact fails in a predictable way (like a failed HashMap lookup), that should be reflected at the type level. If not, ya dun goofed.

Also, if you have a language that doesn’t let you reflect this information at the type level: ya dun goofed.
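
Reflecting failure at the type level is trivial to do; the honest versions of two classic offenders look like this:

```haskell
-- Total replacements for two famous liars: the caller is forced
-- to handle the empty/zero case, and the compiler checks it.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)
```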

Partial functions are the antithesis of this concept. They allow you to say “yeah dude, I can totally give you the first value in an empty list.” Partial functions are like politicians: you can tell they’re lying because their lips are moving. (“But Michael,” you say. “Functions don’t have lips!” Whatever, I’m waxing poetical.)

Alright, so plenty of languages screw this up. Haskell tells those languages “hold my beer.”

Haskell screws up partial functions way, way worse than other languages:

  1. It promotes a whole bunch of them in the standard libraries and Prelude.
  2. Some libraries, like vector (I’m getting to you, don’t worry) make it really confusing by providing an index and unsafeIndex function. Hint: index isn’t really safe, it’s just less unsafe.
  3. There’s no obvious way to search for usages of these partial functions.
  4. And, by far, the worst…

Values are partial too!

Only in a lazy language does this exist. You call a function. You get a result. You continue working. In any other non-lazy language, that means you have a value. If I have a u32 in Rust, I actually have a u32 in Rust. Null pointers in languages like C and Java somewhat muddy this situation, but at least primitive types are really there if they say they’re there.

No, not Haskell. x :: Int may in fact not exist. It’s a lie. let x = head ([] :: [Int]) is a box waiting to explode. And you find out much later. And it’s even worse than that. let alice = Person { name = "Alice", age = someAge } may give you a valid Person value. You can evaluate it. But Cthulhu help you if you evaluate age alice. Maybe, just maybe, someAge is a bottom value. Boom! You’ve smuggled a dirty bomb out.
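
Here’s that dirty bomb in miniature, plus the standard defusal: strict fields, which force the bottom at construction time instead of at some arbitrary later use site. (Person and the field names here are my own illustration.)

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Lazy fields: constructing the record succeeds even if age is bottom;
-- the explosion waits for whoever eventually forces the field.
data PersonLazy = PersonLazy { lazyName :: String, lazyAge :: Int }

-- Strict fields: the bangs force both fields when the constructor
-- itself is evaluated, so the bomb goes off immediately and loudly.
data PersonStrict = PersonStrict { strictName :: !String, strictAge :: !Int }
```

With strict fields, the failure happens where the bad value was produced, which is where you actually want the stack trace.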

I’m not advocating for removing laziness in Haskell. In fact I’m not really advocating for much of anything in this series. I’m just complaining, because I like complaining.

But if I were to advocate some changes…

But ackshualllly, infinite loops

Someone’s gonna say it. So I’ll say it. Yes, without major language changes, you can’t prevent partial functions. You can’t even detect them, unless Turing was wrong (and I have my suspicions.) But Haskell community, please, please learn this lesson:

The perfect is the enemy of the good.

We can get rid of many of the most common partial functions trivially. We can detect many common cases by looking for partial pattern matches and usage of throw (again, horribly named function). “But we can’t get everything” doesn’t mean “don’t try to get something.”

Hubris

Given what I just said, we Haskellers have a lot of hubris. Each time you say “if it compiles it works,” a thunk dies and collapses into a black hole. We’ve got plenty of messes in Haskell that don’t sufficiently protect us from ourselves. The compiler can only do as good a job as our coding standards and our libraries allow.

“But Haskell’s at least better than languages like PHP.” I mean, obviously I agree with this, or I’d be writing PHP. But since I’m being ridiculously hyperbolic here, let me make a ridiculous claim:

PHP is better than Haskell, since at least you don’t get a false sense of security

- Michael Snoyman, totally 100% what he actually believes, you should totally quote this out of context

I’ve said this so many times. So I’ll say it again. Using a great language with safety features is one tiny piece of the puzzle.

There are so many ways for software to fail outside the purview of the type system. We’ve got to stop thinking that somehow Haskell (or, for that matter, Rust, Scala, and other strongly typed languages) are some kind of panacea. Seriously: the PHP people at least know their languages won’t protect them from anything. We should bring some of that humility back to Haskell.

Haskell provides me tools to help prevent certain classes of bugs, so I can spend more of my time catching a bunch of other bugs that I’m absolutely going to write. Because I’m dumb. And we need to remember: we’re all dumb.

More partial functions!

You know what’s worse than partial functions? Insidiously partial functions. We’ve all been screaming about head and tail for years. My hackles rise every time I see a read instead of readMaybe. I can’t remember the last time I saw the !! operator in production code.

But there are plenty of other functions that are just as dangerous, if not more so. More dangerous because they aren’t well known to be partial. They are commonly used. People don’t understand why they’re dangerous. And they fail only in edge cases that people aren’t thinking about.

Exhibit A: I present decodeUtf8. (Thanks Syd.)

Go ahead, search your codebase. Be dismayed that you’ve found it present.

What’s wrong with decodeUtf8? As we established last time, character encoding crap breaks stuff in production. UTF-8 works about 99% of the time, especially for people in Western countries. You’ll probably forget to even test for it. And that function looks so benign: decodeUtf8 :: ByteString -> Text.


This function is a ticking time bomb. Use decodeUtf8' (yes, it’s named that badly, just like foldl') and explicitly handle error cases. Or use I/O functions that explicitly handle UTF-8 decoding errors and throw a runtime exception.
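
The safe pattern looks like this (the fallback text is my own arbitrary choice, not a convention):

```haskell
import qualified Data.ByteString as B
import qualified Data.Text as T
import Data.Text.Encoding (decodeUtf8', encodeUtf8)

-- decodeUtf8 throws an impure exception on invalid input;
-- decodeUtf8' hands the failure back as a value you must handle.
decodeOrFallback :: B.ByteString -> T.Text
decodeOrFallback bs =
  case decodeUtf8' bs of
    Left _err -> T.pack "<invalid UTF-8>"
    Right t   -> t
```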

“I can’t believe Michael still thinks runtime exceptions are a good idea.” I’ll get to that another time. I don’t really believe they’re a good idea. I believe they are omnipresent, better than bottom values, and our least-bad-option.

Law-abiding type classes

Now I’ve truly lost it. What in tarnation could be wrong with law-abiding type classes? They’re good, right? Yes, they are! The section heading is complete clickbait. Haha, fooled you!

There’s a concept in the Haskell community that all type classes should be law-abiding. I’ve gone to the really bad extreme opposing this in the past with early versions of classy-prelude. In my defense: it was an experiment. But it was a bad idea. I’ve mostly come around to the idea of type classes being lawful. (Also, the original namespacing issues that led to classy-prelude really point out a much bigger bad part of Haskell, which I’ll get to later. Stay tuned! Hint: Rust beat us again.)

Oh, right. Speaking of Rust: they do not believe in law-abiding type classes. There are plenty of type classes over there (though they call them traits) that are completely ad-hoc. I’m looking at you, FromIterator. This is Very, Very Bad of course. Or so my Haskell instincts tell me. And yet, it makes code Really, Really Good. So now I’m just confused.

Basically: I think we need much more nuance on this in the Haskell community. I’m leaning towards my very original instincts having been spot on.

This isn’t exactly in line with a “bad part” of Haskell. Up until now I’ve been giving a nuanced reflection on my journeys in Haskell. Let me try something better then. Ahem.


I’m staring at you, Eq Double. No, you cannot do equality on a Double. (And thanks again to Syd for this idea.) Rust, again, Got It Right. See PartialEq vs Eq. Floating point values do not allow for total equality. This makes things like Map Double x dangerous. Like, super dangerous. Though maybe not as dangerous as HashMap Double x, which deserves its own rant later.
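
Concretely: NaN breaks the reflexivity law, and everything built on Eq and Ord inherits the breakage. Data.Map makes the consequences visible:

```haskell
import qualified Data.Map.Strict as Map

nan :: Double
nan = 0 / 0

-- Eq's reflexivity law says x == x for all x. IEEE 754 says NaN /= NaN.
-- Only one of them can win, and it isn't the law.
reflexive :: Double -> Bool
reflexive x = x == x

-- Consequence: a Map with a NaN key cannot even find its own key,
-- because the comparisons the tree relies on all misbehave on NaN.
nanKeyIsLost :: Bool
nanKeyIsLost = not (Map.member nan (Map.singleton nan ()))
```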

So come down from your high horses. We don’t have law abiding type classes. We have “if I close my eyes and pretend enough then maybe I have law abiding type classes.”

Unused import warnings

Haskell has a dumb set of default warnings enabled. (“I think you mean GHC, one implementation of Haskell, not Haskell the language itself.” Uh-huh.) How can we not generate a warning for a partial pattern match? Come on! ADTs and pattern matching are the killer feature to first expose people to. And it’s a total lie: the compiler by default doesn’t protect us from ourselves.

So of course, we all turn on -Wall. Because we’re good kids. We want to do the right thing. And this, of course, turns on unused import warnings. And because each new release of GHC and every library on Hackage likes to mess with us, they are constantly exporting new identifiers from different modules.

The amount of time I have spent adding weird hacks to account for the fact that <> is suddenly exposed from Prelude, and therefore my import Data.Monoid ((<>)) isn’t necessary, is obscene. The introduction of fiddly CPP to work around this sucks. In fact, this all sucks. It’s horrible.
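
For the record, this is the kind of hack I mean. I have keyed it off __GLASGOW_HASKELL__ here for self-containment; real code often uses Cabal’s MIN_VERSION_base macro instead:

```haskell
{-# LANGUAGE CPP #-}

-- <> moved into the Prelude with GHC 8.4 (base-4.11). Import it only
-- on older compilers; otherwise -Wall flags the import as redundant.
#if __GLASGOW_HASKELL__ < 804
import Data.Monoid ((<>))
#endif

greeting :: String
greeting = "Hello, " <> "world"
```

Multiply that by every identifier that migrates between modules across GHC releases, and the obscenity of the time sink becomes clear.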

I didn’t realize how bad it was until I spent more time writing Rust. It reminded me that I never had these problems back as a Java developer. Or C++. This has been a Haskell-only problem for me. Maybe I’ll get into this later.

Side note: I’m trying to avoid turning this series into “Rust is better than Haskell.” But the fact is that many of the problems we face in Haskell don’t exist in Rust, for one reason or another. This specific issue is due to a better crate/module system, and object syntax. As long as we’re getting our cards on the table, I also think Rust demonstrates what a good freeze/unfreeze story would look like (I’m looking at you Map and HashMap), what a good standard library would be, and what a regular release schedule with backwards compat should look like. Oh, and of course good community processes.

The vector package

Did I just mention that Rust can show us what a good standard library is? Yes I did! I’m only going to begin to scratch the surface on how bad vector is here. It’s really bad. vector is at the level of being just bad enough that no one wants to use it, but being just barely serviceable enough and just widespread enough that no one wants to replace it. There are two Haskellers I deeply trust who have taken on efforts to do just that, and even I haven’t moved over.

Firstly, the vector package is a package. It shouldn’t be. Packed arrays should be in the standard library. Rust got this right. Vec is completely core to the language. And it ties in nicely with built-in syntax for arrays and slices, plus the vec! macro. Haskell’s got lists. You can turn on OverloadedLists, but I don’t know if anyone does. And besides, you’ve gotta reach outside of base to get a Vector.

vector is slow to compile. Dog slow. Sloth-recovery-from-Thanksgiving-dinner slow. I’m peeved by this right now because I had to compile it recently.

vector seems to have a litany of runtime performance issues. I haven’t tested these myself. But people regularly complain to me about them. Enough people, with enough Haskell experience, that I believe them. (And text: you’re in this list too.)

Oh, right, text. vector is completely different from bytestring and text. And there’s the pinned-vs-unpinned memory issue that screws things up. I’m unaware of any other language needing to make that distinction. (If there are examples, please let me know. I’d love to read about it.)

Stream fusion is dumb. I mean, it’s not dumb, I love it. “Stream fusion should be a default when performing array operations so that you magically mostly fuse away intermediate buffers but sometimes it doesn’t work because rewrite rules are really fiddly and then my program consumes 8GB of memory whoopsie daisy” is dumb. Really, really dumb.

The API for vector is lacking. Sure, for what it does, it’s fine. But every time I use .push() in Rust, I’m reminded that it could and should be better. I don’t want to work with lists all the time. I want to have a mutable vector API that I’m happy to use. I want to sprinkle more runST throughout my code. I want phrases like “freeze” and “thaw” to be common place, much like mut is regularly used in Rust.
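
That mutate-then-freeze style is already expressible today, if clunkily. Here is a sketch using Data.Array.ST from the array package (which ships with GHC) rather than vector, just to show the shape of it:

```haskell
import Data.Array.ST (newArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray, elems)

-- Build an array destructively inside ST, then freeze it into an
-- immutable UArray. runSTUArray seals the mutation in: no reference
-- to the mutable array can escape.
squares :: Int -> UArray Int Int
squares n = runSTUArray $ do
  arr <- newArray (1, n) 0
  mapM_ (\i -> writeArray arr i (i * i)) [1 .. n]
  pure arr
```

The complaint stands, though: this should be as ergonomic and as routine as .push() and mut are in Rust, not something you reach for reluctantly.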

Oh, and there’s no such thing as a strict, boxed vector. Boo.

So in sum: vector is slow to compile, has surprising runtime performance, sits awkwardly apart from bytestring and text, leans on fragile stream fusion, has a lacking API, and offers no strict boxed variant.

“Michael, you’re not being very nice to the maintainers of this library.” I mean these comments in complete respect. I’ve advocated for many of the things I’m now saying are bad. We learn new stuff and move on. If I had a vision for how to make vector better, I’d propose it. I’m just airing my concerns. I have a vague idea on a nicer library, where you have a typeclass and associated type determining the preferred storage format for each type that can be stored, growable storage, an easy-to-use freeze/thaw, minimal dependencies, quick compile, and associated but separate stream fusion library. But I think working on it would be like adding a new standard, so I’m not jumping into the fray.

Next time…

I’ve dropped plenty of hints for future parts in this series. But I’d really love to hear ideas from others. Thanks to Syd for providing some of the fodder this round. And thanks to a number of people for mentioning partial functions.

I kind of think I owe some attacks on async exceptions. Stay tuned!


Someone’s gonna get upset about my Turing comment above. No, I’m not challenging Turing on the halting problem. I only implied it for poops and giggles.