Interviews I've Gone and Done
I’ve been a full-time software engineer for about 5 years. I’m currently in my second full-time engineering job. I’ve interviewed at dozens of companies and have been given about four job offers. Over the last 6 or 7 years I’ve come across a lot of interview questions. Sometimes I’ve enjoyed the puzzles; sometimes it was like a sentient “Cracking the Coding Interview” was making fun of me. Some interviews were near-misses (I think) and others made me question my choice of career.
Here is a retrospective of questions that have stood out to me, organized in no particular way and with inconsistent commentary.
This first section is about (some of) the questions I’ve faced.
Real-time (in-person/over the phone/video chat)
These are questions that I’ve been asked in a “real-time” interview. Things that fall under this category are on-site interviews, phone calls, and video calls.
Let’s talk about [some code they’ve presented me with]
This is generally a somewhat recent revision or simplified piece of production code. Not coincidentally, these samples contain entry points into a number of different topics. One example was some Python code that handled signals from the operating system. Even though it was Python (i.e. relatively “high-level”) code, the interviewer and I still discussed topics like concurrent programming and Linux syscalls.
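I don’t have the original sample, but a minimal sketch of the kind of Python signal handling we discussed might look like this (the flag and handler names are my own):

```python
import os
import signal

# Python runs handlers on the main thread between bytecode instructions,
# so a handler should do as little as possible: set a flag, write to a
# pipe, etc. -- leave the real work to the main loop.
shutting_down = False

def handle_sigterm(signum, frame):
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate the OS delivering SIGTERM to this process.
os.kill(os.getpid(), signal.SIGTERM)

# A real main loop would poll this flag and exit cleanly.
print(shutting_down)
```

Even a toy like this opens doors to the topics we covered: what the kernel actually does when it delivers a signal, and why you can’t safely grab a lock inside a handler.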
[given a service architecture], you receive the following alert: [insert alert here], what do you do?
This gave me a chance to demonstrate my troubleshooting (aka “firefighting”) capabilities. I had everything that I thought a “reasonable” production system might have at my fingertips. It felt like a choose-your-own-adventure production emergency. Can I look at the logs? Yes, and they say [something potentially useful]. Do we have graphs of response codes? Yes, and here’s what it looks like: [jpg of a graph with 5xx errors going up and to the right].
This probably isn’t something you could expect a new grad (bootcamp or bachelors or otherwise) to know, but could hopefully be asked of anyone with more than a year or so of experience. It felt like one of those old text-based RPGs, like Adventure.
Some kind of SQL question that involved common table expressions, but I didn’t get a job offer so who knows if it was actually supposed to involve common table expressions?
This was something like “given a table with columns parent_name, find all grandparents”. Brutal.
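For what it’s worth, the question can be answered with a plain self-join, no CTE required. Here’s a sketch against an in-memory sqlite3 database, assuming a schema of (name, parent_name) since I don’t remember the exact columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people (name TEXT, parent_name TEXT);
    INSERT INTO people VALUES
        ('Carol', 'Bob'),   -- Carol's parent is Bob
        ('Bob',   'Alice'), -- Bob's parent is Alice
        ('Alice', NULL);
""")

-- = comments above are SQL-style; below, a grandparent is anyone who is
-- the parent of a parent, so we join the table to itself twice.
grandparents = conn.execute("""
    SELECT DISTINCT gp.name
    FROM people AS child
    JOIN people AS parent ON child.parent_name = parent.name
    JOIN people AS gp     ON parent.parent_name = gp.name
""").fetchall()

print(grandparents)  # [('Alice',)]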
Of all my interviews, only a small handful actually contained SQL questions. I’m a little surprised at how rarely I get asked about SQL in interviews, given how useful it’s been to me. I guess it’s a little like how companies will include “proficiency with the command-line” in a job description but will never, ever follow up on it.
Exploring a server
But some companies do follow up on it! This was basically “you’re on a server and want to find out ____, how do you do it?” While the interviewer asked for specific commands and flags, they could definitely relate when I responded with “I don’t think I need [random assortment of flags] but it’s what I’ve always used and it’s worked so far”.
I feel these usually come out when a company tries to come off as having “conversational” interviews when really it just won’t admit that its interviews are quizzes:
- What’s the difference between an inner and outer join?
- [fill out this table of REST keywords and whether or not they’re idempotent (plus some other properties but I can’t remember)]
- What’s map/reduce?
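For the record, the join question has a crisp answer that’s easy to demonstrate with an in-memory sqlite3 database (the tables here are made up): an inner join keeps only rows that match on both sides, while an outer join also keeps unmatched rows from one side, padded with NULLs.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER, name TEXT);
    CREATE TABLE orders (user_id INTEGER, item TEXT);
    INSERT INTO users  VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (1, 'keyboard');
""")

# Inner join: only rows with a match on both sides survive.
inner = conn.execute("""
    SELECT u.name, o.item FROM users u
    JOIN orders o ON o.user_id = u.id
    ORDER BY u.name
""").fetchall()
print(inner)  # [('Ada', 'keyboard')] -- Grace has no orders, so she's dropped

# Left outer join: every row from the left table survives; missing
# right-side columns come back as NULL (None in Python).
outer = conn.execute("""
    SELECT u.name, o.item FROM users u
    LEFT JOIN orders o ON o.user_id = u.id
    ORDER BY u.name
""").fetchall()
print(outer)  # [('Ada', 'keyboard'), ('Grace', None)]
```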
Take-home exercises
These exercises were typically given to me with the expectation that they’d take anywhere between 2 and 10 hours. I inevitably exceeded those time frames.
Take-home exercises have at least one very clear upside: they reduce the anxiety of an interview by making it asynchronous. There’s no one looking over your shoulder, interrupting your thought process by peppering you with questions while you’re trying to think of variable names.
In my opinion, they also have a less-clear downside: it’s harder for interviewers to get feedback on the question. Interviewers will still be able to build up an intuition about how various folks answer a problem, but there’s no reliable way to know how long something took to complete. They have no idea how often someone gets hung up on an awkwardly-worded requirement.
I’m not sure I’ve ever had a follow-up conversation about a code sample, which is disappointing. Seems like it’d be a good opportunity for the applicant to get some useful feedback, and for the hiring team to hopefully learn a bit more about how their questions are being perceived.
With that said, here are some take-home exercises I’ve gotten over the years.
Here is some code related to [technical thing], please tell us how it works and what you might infer about how we handle [technical thing]
Similar to the first “real-time” question, except that this was done without someone looking over my shoulder and I had to give a little more unprompted feedback. I had to be proactive. At times it felt like writing down an observation and then asking myself “am I reading into this too much?”
Write a Terraform script to spin up an environment in AWS with [components]
I forget what the “components” were, but the exercise was to see how one would go about writing a Terraform script (configuration? I don’t know what the right Terraform name is) to spin up some AWS resources. It was for a DevOps role and, I imagine, very closely mirrored what the role would be like.
Write a TCP server in Scala
If I recall correctly, the full prompt was to write a server that could listen on a socket and parse messages in a custom protocol built atop TCP. I replied a day later to withdraw my application.
Interview questions and process efforts
I think most people who have ever interviewed, for engineering positions or otherwise, have stories of companies that never replied after an interview, or who strung them along for a while before finally saying no, or whose process was filled with contextless hypotheticals that clearly didn’t reflect the kind of work the role required. This section is for two examples of the other end of the spectrum.
- Private AWS account
- There were limits to what I could do, but at the start of a take-home exercise I was provided a login and allowed to test my code against a real AWS environment. This set the company apart: even as a small company, it had dedicated and competent AWS administrators. That’s not a given, and “cloud infrastructure” can be absolutely soul-crushing without that kind of backup.
- Slack access
- There were limits to what I could do, but I was given access to a private room, DM access to the interview team, and access to the organization’s custom Slack emojis.
- The company that did this was almost entirely distributed, so seeing how someone can communicate over Slack was probably a legitimate test of “can I work with this person?” for them.
Both of these companies blew me away with how much effort they put into putting their candidate in a practical scenario.
I don’t think I’m any better at interviewing because of these experiences. The people who’ve “gotten it right”, in my opinion, have been those who set a clear expectation for proficiency in certain areas and test exactly that. I’ve also found that I enjoy speaking with interviewers who have seemingly mapped out the decision tree of their questions and can tell when I’m on the wrong branch. I might not have gotten the job, but at least I didn’t hate myself after the interview. That’s a low bar to clear, but not everyone has cleared it.