Overruled and sustained. Your objection is noted, but get back to the matter at hand, counselor, or I will see to it that you are disbarred in every state except this one. What's that? I'm already done? You see how fast I work, counselor. Don't fuck with me.

...Ok, enough silliness. What I want to talk about is object-oriented programming, or more specifically why I employ the object-oriented paradigm when I do. Let's start with some example code.

Our assignment...

Given an array of integers and a target sum, return the indices of two values found in the array which, when added together, equal the target sum.

Did that make sense? I hope so; I'm too lazy to go look up the original description. One caveat is that we can't use the same index twice, so, with that in mind, here is some simple-ish code that solves the problem:

fn find(target: u32, values: &[u32]) -> Option<(usize, usize)> {
    if values.len() < 2 {
        return None;
    }

    for (left_idx, &left) in values.iter().enumerate() {
        for (right_idx, &right) in values.iter().enumerate() {
            if left_idx == right_idx {
                continue;
            }

            if target == (left + right) {
                return Some((left_idx, right_idx));
            }
        }
    }

    None
}

For any array with fewer than two values, we immediately return None because there can be no valid solution. For any longer array, we do a brute force search of every valid combination of indexen. I'm sure this isn't optimal. I'm also sure idgaf. What I do care about is this: it didn't occur to me to write this code.
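To make the contract concrete, here's a usage sketch (the function is repeated so the snippet compiles on its own; the sample inputs are my own invention, not from the original assignment):

```rust
// the `find` function from above, reproduced so this compiles standalone
fn find(target: u32, values: &[u32]) -> Option<(usize, usize)> {
    if values.len() < 2 {
        return None;
    }

    for (left_idx, &left) in values.iter().enumerate() {
        for (right_idx, &right) in values.iter().enumerate() {
            if left_idx == right_idx {
                continue;
            }

            if target == (left + right) {
                return Some((left_idx, right_idx));
            }
        }
    }

    None
}

fn main() {
    // 2 + 7 == 9, found at indices 0 and 1
    assert_eq!(find(9, &[2, 7, 11, 15]), Some((0, 1)));
    // no pair sums to 4
    assert_eq!(find(4, &[1, 2]), None);
    // fewer than two values: no valid solution by definition
    assert_eq!(find(10, &[5]), None);
}
```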

Here's what I wrote instead:

struct BruteForceSearch<'data> {
    data: &'data [u32],
}

impl<'data> BruteForceSearch<'data> {
    fn new(data: &'data [u32]) -> Self {
        Self { data }
    }

    fn search(&mut self, target: u32) -> Option<(usize, usize)> {
        if self.data.len() < 2 {
            return None;
        }

        for (left_idx, &left) in self.data.iter().enumerate() {
            for (right_idx, &right) in self.data.iter().enumerate() {
                if left_idx == right_idx {
                    continue;
                }

                if target == (left + right) {
                    return Some((left_idx, right_idx));
                }
            }
        }

        None
    }
}
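For comparison, here's what the call site looks like (the struct is repeated so this compiles standalone; again, the sample inputs are mine):

```rust
// the struct from above, reproduced so the snippet compiles on its own
struct BruteForceSearch<'data> {
    data: &'data [u32],
}

impl<'data> BruteForceSearch<'data> {
    fn new(data: &'data [u32]) -> Self {
        Self { data }
    }

    fn search(&mut self, target: u32) -> Option<(usize, usize)> {
        if self.data.len() < 2 {
            return None;
        }

        for (left_idx, &left) in self.data.iter().enumerate() {
            for (right_idx, &right) in self.data.iter().enumerate() {
                if left_idx == right_idx {
                    continue;
                }

                if target == (left + right) {
                    return Some((left_idx, right_idx));
                }
            }
        }

        None
    }
}

fn main() {
    let values = [2, 7, 11, 15];

    // construct, then search: one extra line versus the free function
    let mut searcher = BruteForceSearch::new(&values);
    assert_eq!(searcher.search(9), Some((0, 1)));
    assert_eq!(searcher.search(26), Some((2, 3)));
}
```

The call site gains exactly one line over the free function, and that's the whole observable difference.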

As you can see, the difference is exclusively found in the definition of a struct to encapsulate this algorithm and the definition of a constructor that allows us to build the struct based on any appropriate data. To wit, this code is basically an unmitigated waste of ink. Or bytes. Whichever. So why did I write it?

Why do I do what I do?

Sometimes it's more revealing to talk about how than why, because humans often have no explicit reason for doing what we do. In many cases, it's a question of the structure of the processes involved. For instance, I often employ what one might term an incremental analysis.

"Incremental analysis"

The title "incremental analysis" sounds better, but I usually refer to this as "thinking with my fingers." That is, I start writing code in order to provide myself with a framework as I mentally explore the problem space. The original version of the struct included left and right pointers and was intended to support an incremental search in case you didn't want to perform every step at once, but I wound up dropping that idea. Nonetheless, I kept the struct; I didn't really see a reason not to.

Such an analysis is not driven by logic or reasoning but by heuristics—by past experience. As such, it is heavily influenced by this second item: the cognitive default.

Cognitive defaults

What do we usually do to solve a given problem? Well, that depends on the kinds of problems we generally face and on the constraints normally applied to their solutions. In my case, I generally face problems related to business logic, and they are often solved within a pretty enterprisey (that's a word, at least as far as I'm concerned) framework.

Within such a framework, requirements almost always include things like "configurable," or "SOLID," or "DRY," and it's usually considered to be of vital importance that the logic be mockable or injectable or that the interface be in some way abstracted from its implementation... In other words, our defaults practically always require some kind of object orientation. So, you could do this...

trait Search {
    fn search(&mut self, target: u32) -> Option<(usize, usize)>;
}

impl<'data> Search for BruteForceSearch<'data> {
    fn search(&mut self, target: u32) -> Option<(usize, usize)> {
        // not infinite recursion: inherent methods take precedence over
        // trait methods, so this calls the `search` defined above
        self.search(target)
    }
}
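To show what that trait actually buys, here's a sketch of the "mockable/injectable" story; `MockSearch` and `describe` are my own invented names, and the trait impl inlines the search logic so the snippet stands alone:

```rust
struct BruteForceSearch<'data> {
    data: &'data [u32],
}

impl<'data> BruteForceSearch<'data> {
    fn new(data: &'data [u32]) -> Self {
        Self { data }
    }
}

trait Search {
    fn search(&mut self, target: u32) -> Option<(usize, usize)>;
}

impl<'data> Search for BruteForceSearch<'data> {
    fn search(&mut self, target: u32) -> Option<(usize, usize)> {
        for (left_idx, &left) in self.data.iter().enumerate() {
            for (right_idx, &right) in self.data.iter().enumerate() {
                if left_idx != right_idx && target == left + right {
                    return Some((left_idx, right_idx));
                }
            }
        }

        None
    }
}

// a canned fake: the kind of thing the "mockable" requirement asks for
struct MockSearch(Option<(usize, usize)>);

impl Search for MockSearch {
    fn search(&mut self, _target: u32) -> Option<(usize, usize)> {
        self.0
    }
}

// business logic that only knows about the trait, not the implementation
fn describe(searcher: &mut dyn Search, target: u32) -> String {
    match searcher.search(target) {
        Some((l, r)) => format!("found at ({}, {})", l, r),
        None => "no solution".to_string(),
    }
}

fn main() {
    let values = [2, 7, 11, 15];
    assert_eq!(describe(&mut BruteForceSearch::new(&values), 9), "found at (0, 1)");
    // in a test, we can swap in the mock without touching `describe`
    assert_eq!(describe(&mut MockSearch(None), 9), "no solution");
}
```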

Of course, we could also just define a delegate (...or whatever you want to call this) describing our search:

// note: this doesn't currently compile
type SearchDelegate = Fn(u32, &[u32]) -> Option<(usize, usize)>;

...We could say that'd be a lot more convenient, except that there are some annoying lifetime issues surrounding this and it's not even currently possible; type aliases for traits are an in-progress feature and don't even have a nightly implementation yet. But anyway...
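For what it's worth, a plain function-pointer alias does compile today, so a stateless version of the delegate idea can be sketched like this (the names are mine):

```rust
// a function-pointer alias works today, unlike the trait alias above;
// the trade-off is that `fn` pointers can't capture any state
type SearchDelegate = fn(u32, &[u32]) -> Option<(usize, usize)>;

fn find(target: u32, values: &[u32]) -> Option<(usize, usize)> {
    for (left_idx, &left) in values.iter().enumerate() {
        for (right_idx, &right) in values.iter().enumerate() {
            if left_idx != right_idx && target == left + right {
                return Some((left_idx, right_idx));
            }
        }
    }

    None
}

// a caller that takes the search strategy as a plain delegate
fn run(delegate: SearchDelegate, target: u32, values: &[u32]) -> Option<(usize, usize)> {
    delegate(target, values)
}

fn main() {
    assert_eq!(run(find, 9, &[2, 7, 11, 15]), Some((0, 1)));
    assert_eq!(run(find, 100, &[2, 7, 11, 15]), None);
}
```

The catch is exactly that statelessness: a `fn` pointer can't close over data, which is the flexibility the trait object keeps.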

It's also just not necessarily compatible with some of the other things we might want to do in a large system. A lot of these things are hypotheticals based on other hypotheticals, so they sound silly, but what if we want to be able to change the way we do X at some later date? A common problem domain for that is password hashing.

struct PasswordHasherFactory;

impl PasswordHasherFactory {
    fn get(version: Version) -> Box<Hasher> {
        match version {
            ...
        }
    }
}

Now it becomes clear that returning a trait object gives us some significant advantages over returning a delegate. Each implementation of the trait, for instance, could expose a different output type. One hash might be a string, while another might be some box of bytes. Another might even be lazy. The sky is the limit, right?
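As a sketch of that "different output type" point, assuming an associated type on the trait (these hashers are toys I made up, and note that associated types push you toward generics rather than `Box<Hasher>`, which is part of the trade-off):

```rust
// each implementation picks its own output via an associated type
trait Hasher {
    type Output;
    fn hash(&self, password: &str) -> Self::Output;
}

// one implementation yields a String...
struct HexHasher;

impl Hasher for HexHasher {
    type Output = String;
    fn hash(&self, password: &str) -> String {
        // toy "hash": hex-encode the bytes (NOT a real password hash)
        password.bytes().map(|b| format!("{:02x}", b)).collect()
    }
}

// ...while another yields a boxed slice of bytes
struct ByteHasher;

impl Hasher for ByteHasher {
    type Output = Box<[u8]>;
    fn hash(&self, password: &str) -> Box<[u8]> {
        // toy "hash": just copy the bytes
        password.as_bytes().to_vec().into_boxed_slice()
    }
}

fn main() {
    assert_eq!(HexHasher.hash("ab"), "6162");
    assert_eq!(&*ByteHasher.hash("ab"), b"ab");
}
```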

...In short, our usual solution is influenced by our usual problem, and the fact that your usual solution is heavily object oriented doesn't necessarily imply the existence of extensive brain damage.

All right, fine, but what's the cost?

In C# and a lot of other languages, the primary cost is additional garbage that the collector has to deal with. At scale, this can become an issue. But here's the cost in Rust:

scratch [master●] cargo bench
    Finished release [optimized] target(s) in 0.0 secs
     Running target/release/deps/scratch-3b5c352413664514

running 2 tests
test bench_fn       ... bench:          46 ns/iter (+/- 16)
test bench_searcher ... bench:          46 ns/iter (+/- 12)

...Which is to say, "negligible." The compiler doesn't see the difference between the two options, so don't feel too bad if the first thing you reach for is an object of some kind instead of a direct, imperative solution. Hell, I would even make the argument that some level of abstraction makes code easier to read.

war