November 15, 2014

CAP Should Be CLAP

The CAP theorem says you cannot have your data Consistent (all updates are temporally ordered for all actors), Available (transactions respond with success or failure) and Partition tolerant (tolerant of a subset of machines being unreachable by some actors, yet still reachable by others).

A problem with CAP is that Availability is a confusing word here. In distributed systems it's really Latency we are interested in. Any resource that's down now might recover soon, soon enough to consider the operation a success if it just waits a little longer. And even available resources might be overloaded and take so long to answer as to be practically unavailable.

The confusion is that Availability implies a binary choice, true or false, available or not. But degrees of availability are more easily conceptualized as latency. Latency can be measured and expressed as a single value, such as an average or an upper limit. Or a range of values, broken out into percentiles.
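To make that concrete, here's a tiny sketch using Python's standard library (the sample latencies are made up):

    import statistics

    # Made-up sample of request latencies, in milliseconds.
    latencies_ms = [12, 15, 14, 13, 250, 16, 14, 13, 15, 900]

    avg = statistics.mean(latencies_ms)
    cuts = statistics.quantiles(latencies_ms, n=100)  # 1st..99th percentile cut points
    p50, p99 = cuts[49], cuts[98]
    print(f"avg={avg:.1f} ms  p50={p50:.1f} ms  p99={p99:.1f} ms")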

We can't just assume that because a client can connect, a server is available. It's much more complicated than that. Modern systems are all about resource sharing: multi-core, multi-tasking machines with multiple independent and isolated processes carrying on semi-autonomously, all using shared networks, NICs, DRAM, processors and drives. This resource sharing dramatically lowers the costs of building and extending systems: we don't need all the resources necessary to do everything all at once, we just need enough to satisfy peak concurrent demands.

But the cost of all this resource sharing is that we don't have guarantees about how long something will take, or that it will even be successful. Though each process might be well behaved, the system as a whole is chaotic. Maybe an independent process is doing large bulk file copies and invoking lots of disk IO. Or it temporarily needs a bunch of RAM to render an image, causing your process to page some of its working set out to disk. Or to fail a memory allocation. Or to get killed by the OOM killer.

And without timeout failures, distributed processes sharing resources can deadlock on each other indefinitely, each holding resources the others need to continue.

It's because of this non-deterministic resource sharing that we must be able to deal with timeout failures. The alternative is deadlocking, or more likely "slowlocking": concurrent operations slowing each other down to unusable levels, much slower than executing the requests serially. Systems must be able to tolerate such latency failures, since we can't eliminate them without the expense of assigning dedicated resources to everything. Which is how hard realtime systems work. And they are very expensive.

Fortunately we can model any loss of availability as excessive latency and reuse the same error handling. No new error paths need to be created or modified. Restart each failed operation with an exponential backoff, and the system is guaranteed to clear out its backlog of work if the failures are a shared-capacity problem.
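As a sketch of that retry policy, in Python (the attempt cap, delay values and jitter range are illustrative assumptions, not from any particular system):

    import random
    import time

    def retry_with_backoff(op, max_attempts=8, base_delay=0.1, max_delay=10.0):
        """Retry op() on timeout, backing off exponentially with jitter.

        Treats any loss of availability as excessive latency: wait longer,
        then try again. If the failures are a shared-capacity problem,
        spreading the retries out lets the system clear its backlog.
        """
        for attempt in range(max_attempts):
            try:
                return op()
            except TimeoutError:
                if attempt == max_attempts - 1:
                    raise
                # Exponential backoff, capped, with jitter so that many
                # clients don't retry in synchronized stampedes.
                delay = min(max_delay, base_delay * (2 ** attempt))
                time.sleep(delay * random.uniform(0.5, 1.0))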

This isn't a new notion, but looked at this way, any availability requirements are re-expressible as latency requirements, and availability failures are simply timeouts. Requests don't fail, they time out.

CAP should be renamed CLAP, for Consistency, Latency/Availability and Partition tolerance. It doesn't change how anything works, but it makes reasoning about distributed design a bit easier.

Link

November 11, 2014

7 Habits of Highly Defective Testing

Good testing of software requires effort and discipline. Ain't no one got time for that.

1. Tests should be difficult to run.

Give developers good excuses for not running tests by requiring version-specific dependencies that aren't needed by the production code. Also, the more manual steps it takes to set up and tear down test environments, the lower the likelihood they'll get run.

2. Tests should take a long time and require lots of resources.

Tests that run slowly, ideally many hours or even days long, help keep developers out of the zone and easily distractible. To keep development expensive and lengthy, make sure each individual test instantiates fresh instances of everything: new servers, new clients, new datasets, ideally whole new installs. The more CPU, RAM, disk and network IO necessary, the greater the costs and non-productivity.

3. Tests should have spurious failures.

When tests fail because the tests themselves are buggy, it reduces confidence in the tests, making it much easier to ignore real failures. Any effort to clean up the tests should be met with suggestions that time is better spent growing production code size and complexity.

4. Don't write tests for bugs found in production.

When a bug is found in production code, don't waste time writing tests that trigger the bug, just fix the code and be done. This increases the likelihood of future regressions, and as a bonus hides the areas of code that are poorly designed and implemented.

5. Tests should rely on timeouts to indicate failure.

Sleeps that wait just long enough for production code to finish before checking the outputs will lengthen the time tests take to run AND cause hard-to-duplicate, spurious failures. This reduces confidence that a failing test found a bug, making real bugs easy to ignore.
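For contrast with this fine habit, here's a minimal sketch in Python of the fixed-sleep anti-pattern next to a poll-with-deadline wait (start_job and get_result are hypothetical stand-ins for whatever the test drives):

    import time

    def test_with_fixed_sleep(start_job, get_result):
        """The defective habit: guess how long the job takes, check once."""
        start_job()
        time.sleep(30)  # too short: spurious failure; too long: slow suite
        assert get_result() == "done"

    def test_with_polling(start_job, get_result, deadline_s=30.0):
        """The alternative: poll cheaply, finish as soon as the job does."""
        start_job()
        deadline = time.monotonic() + deadline_s
        while time.monotonic() < deadline:
            if get_result() == "done":
                return  # passes in milliseconds on a fast machine
            time.sleep(0.05)
        raise AssertionError("job did not finish before the deadline")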

6. Only test success conditions.

Writing tests for what happens when connections time out, when allocations fail, or when processes are terminated or machines lose power is like wanting the code to fail. So only test what happens when everything goes right; it makes you seem like a team player.

7. Failures should be hard to debug.

Throw away error codes, error strings, log messages, and anything else that could help debug a failure. Writing production code is hard. Debugging it should be hard too.

Link

November 6, 2014

New Messaging and Queueing Project

I'm on a message queue kick.

The most recent thing I worked on at Couchbase was leading the design of the 3.0 Database Change Protocol (DCP), which shipped a few weeks ago. DCP's got some nice properties that allow it to move a lot of data quickly and safely, and it's used by Couchbase replication, indexing and a few other places. It's really cool to see how well it's working; I left just as the coding work was starting. I'm always a little surprised when things actually work, since I spend a lot of time worrying about design flaws that might have been overlooked.

Couchbase DCP is basically an engine and protocol for quickly receiving changes from the database, incrementally and asynchronously. Coincidentally, it's got a lot of the same traits as a message queue and similar design decisions as the Apache Kafka project. The more I thought about it, the more I realized the basics of messaging and queueing are really the foundation of most distributed systems.

So now I'm starting a new message queue project that's loosely based on Couchbase DCP technology (and using the Couchbase ForestDB durable storage engine code as a starting place). But this is being designed and optimized for messaging and queueing applications, so there are some key differences. It can be used like an enterprise message bus, for email and messaging, for stream processing and analytics, and can even apply to high scale, near realtime interactive applications like Uber.

A lot of the uses fall into the mission-critical infrastructure category, on which an ever-increasing amount of money is being spent. It's when you can find a business model to support your technological interests that things get interesting. And I think things are about to get really interesting.

Link

October 30, 2014

Pumpkin Combat

[Photo: DSCN0876.jpg]

Link

October 10, 2014

Tank Man

Years ago, when I was in Charlotte NC working on a new document database named CouchDB, it sounded (at the time) about as sexy as a bowl of diarrhea, I had a bunch of code that didn't quite work, and our savings were shrinking every month. I was embarrassed to tell people what I was working on. And I made a decision to stop being a coward and commit to what I was doing. I pasted a sign onto my monitor, "Commitment: Learn it, Live It", as a reminder to be all in on this. To not feel like a fool for pursuing a path when I was in over my head. Every day was a struggle to make progress. A battle to remain committed.

I used to go to the gym and pretend in my head that I had already died, that it was 100 years in the future, and I was dreaming of some guy who wrote a database. I didn't know how the story ended yet; I hadn't gotten that far. It helped me to remember life is a journey, not a destination. Stay on the path, the story is what's interesting. The success or failure might already be predetermined, so fight like a hero you respected, a hero who lost, who went down swinging. Live that story if you have to, but goddammit, fight.

Link

September 25, 2014

Single Dad

It's been a year since I "separated" from my high school sweetheart, the love of my life. We'd been married for 16 years at the time, and struggling for the last 6. I left Couchbase mostly because I wanted to get away from the stress of startup life and find a way for us to make things work. I cashed in some stock to take time off, but things only got worse, and the pain of leaving Couchbase, where most of my friends were, while our marriage was filled with conflict, was too much.

I think it's CouchDB and then Couchbase, and how far I took it, that was a big factor in our problems. I spent so much time on it, and was so isolated for so long, that I ended up losing a big part of myself. It happened before the startup. I became detached, anxious, and wasn't there for my wife as much as I should have been. And that hurt her. I was also sleep eating. I'd devour all kinds of high calorie food and not remember it. And that also freaked her out. But since I was good at shutting off my own feelings, I couldn't empathize with her and the hard times of raising 3 kids as the primary caregiver, and moving way too many times over the years.

We both tried very hard, but in my case I didn't understand what she was going through. And I felt she didn't understand me either. Startups are hard. But we try to make it seem glamorous. It's part of the game. But it's hard and a lot of stress. And if it doesn't work out, you end up with nothing. And a bunch of people who believed in you lose their jobs.

And in the struggle there comes a point when you realize staying married is worse for the kids, and that if things get any worse, your problems will truly mess them up. That all the conflict and hurt is taking its toll on the ones you are fighting so hard to give a good life to.

The divorce was final a few months ago. The terms were pretty standard for 50/50 custody, and we were able to complete it amicably, with no court battle. Overall it was a peaceful process, but at times it felt extremely painful, scary and conflict-ridden. It still does. And sometimes the pain and anger comes back and I behave like a big jerk. But compared to most divorces I think we both did pretty good.

We now live about 1/2 mile from each other in Alameda CA. It's a beautiful area and it's great for the kids. I can't say for sure I'm any happier; the transition has been, and continues to be, hard. Being alone is hard. But I think I'm a better father. Though there's still much room for improvement.

In my own upbringing I was bounced around a lot. I didn't get a lot of care and attention. And that contributed to low self-esteem, which took years to finally understand. I was lucky I didn't end up in much worse shape. My ex-wife was a big part of me not being a total wreck; she took good care of me over the years. Soothed me when I needed it. I have to learn to be able to function without that. I need to be there for my kids. I don't want them to deal with the crap I went through, some of which affects me to this day.

Dating when you have 3 kids is scary. I want them to be around good people and have them see healthy relationships. And I don't want to bring crazy people into their lives. But I'm not sure how to know what's what. So I'm trying to be cautious and take it slower. The problem with marrying so young is I didn't experience all that crap most people do in their twenties, when it's easier to make mistakes.

And now, after being married so young and for so long, I have to become a new person. A single person. A single dad. Also I have to grow up more. I have to be able to handle anything that comes my way, and be able to do it alone if necessary. My kids actually make this easier. They are my sense of purpose: to raise them right and to have them feel loved by a dad and mom they can respect.

Anyway, this is the reason I've been quiet so long. The last year has been the hardest of my life. The shame, the fear, the anger, the loss. I didn't want to be a single man at 40 with 3 kids. I wanted so much to avoid it. It's hard to live it. Hard to even admit it. But I don't feel sorry for myself. I caused a lot of it. And everyone has hard times. Mine haven't been so bad. I'm learning from them. And overall, life is quite good. I'm healthy, and my kids are healthy and happy and go to great schools. And Couchbase continues to grow.

I'm taking time off until the end of the year. I'm trying to get me and my kids more integrated in Alameda without a job distracting me, and continue to rebuild a friendship with my ex. As a Dad, it's my job. I'm taking it seriously.

Link

December 16, 2013

What a difference a few months make

4 months off and I feel reborn. This time has meant everything to me and especially my kids.

I miss Couchbase terribly, but I'm also glad to be done and starting a new chapter in my career. What I miss most are the great people there, super bright, hard-working folks who amazed me on a daily basis. Which, ironically, was the thing that made it easy to leave: seeing the different teams taking the ball and running with it without me leading the charge. Things at Couchbase grew and matured so fast I started to realize I couldn't keep up without spending way more time working. I was no longer the catalyst that moved things forward; I was becoming the bottleneck preventing engineers from maturing and leaders from rising.

Anyway, I'll miss my whole CouchDB and Couchbase odyssey immensely. I know it's a rare thing to have helped create and lead the things I did. I don't take it for granted. It was a hell of a ride.

And now what's next? Well, beginning in January 2014 I'll be starting at salesforce.com and working closely with Pat Helland on a project that eventually will underpin huge amounts of their site infrastructure, improving performance, reliability and predictability, while reducing production costs dramatically. It's quite ambitious and I don't know if I'm yet at liberty to talk about the project details and scope. But if we can pull it off we will change a lot more than Salesforce, in the same way the Dynamo work changed a lot more than Amazon's shopping cart. It's ridiculously cool. And we are hiring the best people possible to make it happen.

Here I go again. I'm a very, very lucky man.

Link

September 24, 2013

Human, After All

Whoa, I have a blog. Weird.

As I take a break from all work (I left Couchbase about a month ago), one of the things I'm trying to do is get the machine out of my head. Building a distributed database is a peculiar thing. Building a startup that sells distributed databases is a very peculiar thing. It did something weird to my brain. I'm still not sure what happened.

That moment in chess when I see the mistake, and it suddenly feels like the blood drains from my head. For me it's when the game is decided. Win or lose, it was a mistake to play at all. I didn't want to lose. I didn't want to win. I just wanted to play. To keep the game going.

Somehow I developed social anxiety. Not a fear of people. A fear of causing fear in people. I lost my voice. Not my physical voice. But the one that says what it really thinks, is gregarious, is angry, is sad, wants to have fun, wants to complain. The one that cares not about the right answer. The one that just wants to interact, with no particular goal.

I forgot how to be human. I didn't know that was possible. I didn't even notice it happened, I didn't know what I had lost until I started to get better.

I saw this thing in my head, the machine. Automata. It was beautiful. The more I thought about it, the more clearly I could see it. I connected all the dots. It was so compelling. It was engineering. It was physics. It was metaphysics. I had to bring it into the real world. I couldn't stop thinking about it. It could be lost forever if I did.

Most people create differently. They create a little, think a little, create a little, think a little. I like to work by thinking very hard until I can clearly see what should be built. Before I write code. Before I write specs. I want to see it, in my mind. I can't explain what I see. I suppose it's like describing color to a blind man.

There is a hidden dimension. The people who can see it, who can move around in this unseen dimension are special to me. It's like when everyone puts their head down to pray, only you don't. You look around. And you see the other people who didn't put their head down. We broke the rules. But we broke nothing, we just see something others don't. Sacred doesn't exist.

The only language I know for sure to describe it is code. When I can see it working in my head, I know it will work in the real world, in code. Then I move to bring it to the real world through code.

But I took it too far. I thought too long. What I built in my head was too big for a human. Too big for this human anyway.

I was compelled to keep the vision of the machine lit, for fear it would vanish before it made it into the real world. The machine started to take over my mind. No, that's not true. I pushed everything I could aside, squished it up to make room for the machine. Or maybe I fed it to the machine. Or maybe I threw it overboard.

It never occurred to me I might be giving up something I needed, that others needed from me, that I wanted to give to them, to myself. Or maybe I didn't care. I wanted to bring the machine to life. I knew if I could bring it to life, it would change the world. Isn't that worth fighting for?

Fear is a powerful motivator. It's also the mind killer. I was afraid of losing the battle. Creating technology is play. Creating a startup is a fight. But I didn't notice I was losing the war. Everything was riding on this. I no longer played with a posture of I couldn't lose. Now I must win.

Then something happened, and I saw a glimmer of what I once was. I realized I was no longer playing a game of creation, but waging a war of attrition. And my humanity was the resource. I was grinding myself away.

I noticed this almost a year ago. Something profound finally gave me the perspective of what I was doing. I began to heal something I didn't know was broken.

Since then I tried to keep the machine fed, yet under control. But still I couldn't stop. The machine was perfect. It solved the problems, it gave the right answers. If it failed, it did so gracefully, predictably. It seemed more deserving than me. A machine over a human. Now that is fucked. up. shit.

So I slammed on the brakes. I'm more than a glimmer. I'm worth more than a machine. I'm learning to be a human. Again. And it's harder than it looks. It's icky. There are no right answers. Only paths and possibilities. Time is an illusion, but it's later than you think.

Strangely, as I try to evict the machine, I can still see it. From a different perspective. Perhaps more clearly than before. I don't know. But I'm not mad at it. It's wonderful. It's Schrödinger's cat. It was already dead. It was always alive. It's not the answer, it's a path. Someday I hope to be human enough to tell you why.

Link

May 3, 2013

Dynamo Sure Works Hard

We tend to think of working hard as a good thing. We value a strong work ethic and determination in the face of adversity. But if you are working harder than you should to get the same results, it's not a virtue, it's a waste of time and energy. If it's your business systems that are working harder than they should, it's a waste of your IT budget.

Dynamo based systems work too hard. SimpleDB/DynamoDB, Riak, Cassandra and Voldemort are all based, at least in part, on the design first described publicly in the Amazon Dynamo paper. It has some very interesting concepts, but ultimately fails to provide a good balance of reliability, performance and cost. It's pretty neat in that each transaction allows you to dial in the levels of redundancy and consistency, trading off performance and efficiency. It can be pretty fast and efficient if you don't need any consistency, but the more consistency you want, the more you have to pay for it via a lot of extra work.

Network Partitions are Rare, Server Failures are Not

... it is well known that when dealing with the possibility of network failures, strong consistency and high data availability cannot be achieved simultaneously. As such systems and applications need to be aware which properties can be achieved under which conditions.

For systems prone to server and network failures, availability can be increased by using optimistic replication techniques, where changes are allowed to propagate to replicas in the background, and concurrent, disconnected work is tolerated. The challenge with this approach is that it can lead to conflicting changes which must be detected and resolved. This process of conflict resolution introduces two problems: when to resolve them and who resolves them. Dynamo is designed to be an eventually consistent data store; that is all updates reach all replicas eventually.

- Amazon Dynamo Paper

The Dynamo system is a design that treats a network switch failure as having the same probability as a machine failure, and pays the cost with every single read. This is madness. Expensive madness.

Within a datacenter, the Mean Time To Failure (MTTF) for a network switch is one to two orders of magnitude higher than for servers, depending on the quality of the switch. This is according to data from Google about datacenter server failures, and the published numbers for the MTBF of Cisco switches. (There is a subtle difference between MTBF and MTTF, but for our purposes we can treat them the same.)

It is claimed that when W + R > N you can get consistency. But it's not true, because without distributed ACID transactions, it's never possible to achieve W > 1 atomically.

Consider W=3, R=1 and N=3. If a network failure, or more likely a client/app tier failure (hardware, OS or process crash), happens during the writing of data, it's possible for only replica A to receive the write, with a lag until the cluster notices and syncs up. Another client with R=1 can then do two consecutive reads, getting newer data first from node A and older data next from node B for the same key. But you don't even need a failure or crash: once the first write occurs there is always a lag before the next server(s) receive it. It's possible for a fast client to do the same read twice, getting a newer version from one server, then an older version from another.
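Here's a toy sketch of that race in Python (the replica model is deliberately simplistic; it exists only to illustrate the ordering problem):

    class Replica:
        def __init__(self, name, version, value):
            self.name, self.version, self.value = name, version, value

    # All three replicas start in sync at version 1.
    replicas = {n: Replica(n, 1, "old") for n in "ABC"}

    # A W=3 write of version 2 begins, but the client/app tier crashes
    # after only replica A receives it: there is no atomicity across replicas.
    replicas["A"].version, replicas["A"].value = 2, "new"

    # An R=1 reader that hits A and then B sees version 2, then version 1:
    # a newer value followed by an older one, for the same key.
    for node in ("A", "B"):
        r = replicas[node]
        print(f"read from {r.name}: version={r.version} value={r.value!r}")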

What is true is that if R > N/2, you get a consistency guarantee where it's not possible to read a newer value and then have a subsequent read return an older one.

For the vast majority of applications, it's okay for a failure to lead to temporary unavailability. Amazon believes capturing shopping cart writes is so important that it's worth the cost of quorum reads, or inconsistency. Perhaps. But the problems and costs multiply. If you are doing extra reads to achieve high consistency, then you are putting extra load on each machine, requiring extra server hardware and extra networking infrastructure to provide the same baseline performance. All of this increases the frequency of component failures and increases operational costs (hardware, power, rack space and the personnel to maintain it all).

A Better Way?

What if a document had 1 master and N replicas to write to, but only a single master to read from? Clients know, based on the document key and a topology map, which machine serves as the master. That makes reads far cheaper and faster. All reads and writes for a document go to the same master, with writes replicated to the replicas (which also serve as masters for other documents; each machine is both a master and a replica).

But, you might ask, how do I achieve strong consistency if the master goes down or becomes unresponsive?

When that happens, the cluster also notices the machine is unresponsive or too slow, removes it from the cluster, and fails over to a new master. Then the client retries and has a successful read.

But, you might ask, what if the client asks the wrong server for a read?

All machines in the cluster know their role, and only one machine in the cluster can be a document's master at any time. The cluster manager (a regular server node elected by Paxos consensus) makes sure to remove the old master, then assign the new master, then tell the clients about the new topology. The client updates its topology map and retries at the new master.

But, you might ask, what if the topology has changed again, and the client again asks to read from the wrong server?

Then this wrong server will let the client know. The client will reload the topology map and re-request from the right server. If the right master server isn't really right any more because of another topology change, it will reload and retry again. It will do this as many times as necessary, but typically it happens only once.
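A sketch of that client loop in Python (fetch_topology, master_for and WrongServerError are hypothetical names for illustration; the real Couchbase client protocol differs in its details):

    class WrongServerError(Exception):
        """Hypothetical signal: the node contacted is not this key's master."""

    def read(key, cluster, max_retries=5):
        """Route a read to the key's master, refreshing topology on a miss."""
        topology = cluster.fetch_topology()
        for _ in range(max_retries):
            server = topology.master_for(key)  # map the key to its master node
            try:
                return server.get(key)
            except WrongServerError:
                # The node we asked is no longer this key's master: reload
                # the topology map and retry at the (possibly new) master.
                topology = cluster.fetch_topology()
        raise TimeoutError(f"could not locate the master for {key!r}")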

But, you might ask, what if there is a network partition, and the client is on the wrong (minor) side of the partition, and reads from a master server that doesn't know it's not a master server anymore?

Then it gets a stale read. But only for a little while, until the server itself realizes it's no longer in heartbeat contact with the majority of the cluster. And partitions like this are among the rarest forms of cluster failure: they require a network failure, and for the client to be on the wrong side of the partition.

But, you might ask, what if there is a network partition, and the client is on the wrong (smaller) side of the partition, and WRITES to a server that doesn't know it's not a master server anymore?

Then the write is lost. But if the client wanted true multi-node durability, the write wouldn't have succeeded (the client would time out waiting for the replica(s) to receive the update) and the client wouldn't unknowingly lose data.

What I'm describing is the Couchbase clustering system.

Let's Run Some Numbers

Given the MTTF of a server, how much hardware do we need, and how quickly must the cluster fail over to a new master, to still meet our SLA requirements vs. a Dynamo based system?

Let's start with some assumptions:

We want to achieve 4000 transactions/sec with a replication factor of 3. Our load mix is 75% reads / 25% writes, i.e. 3000 reads/sec and 1000 writes/sec.

We want enough consistency that we don't read newer values and then older values, so for Dynamo:

    R = 2, W = 2, N = 3

But for Couchbase:

    R = 1, W = 2, N = 3

This means for a Dynamo style cluster, the load will be:
Read operations/sec: 9000 (each of the 3000 reads/sec is sent to all 3 nodes)
Write operations/sec: 3000 (each of the 1000 writes/sec lands on 3 replicas)

This means for a Couchbase style cluster, the load will be:
Read operations/sec: 3000 (each document is read only from its master node, but document masters are spread evenly across the 3 nodes)
Write operations/sec: 3000 (each of the 1000 writes/sec lands on 3 replicas)
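Spelling out the arithmetic (a throwaway Python sketch of the assumptions above):

    total_tps, read_frac, n = 4000, 0.75, 3

    client_reads = total_tps * read_frac         # 3000 reads/sec
    client_writes = total_tps * (1 - read_frac)  # 1000 writes/sec

    # Dynamo (R=2, W=2, N=3): each read is sent to all 3 nodes,
    # each write lands on all 3 replicas.
    dynamo_read_ops = client_reads * n           # 9000 ops/sec
    dynamo_write_ops = client_writes * n         # 3000 ops/sec

    # Couchbase (R=1, W=2, N=3): each read hits only the document's master,
    # each write still lands on all 3 replicas.
    couchbase_read_ops = client_reads * 1        # 3000 ops/sec
    couchbase_write_ops = client_writes * n      # 3000 ops/sec

    print(dynamo_read_ops + dynamo_write_ops,          # 12000 total ops/sec
          couchbase_read_ops + couchbase_write_ops)    # 6000 total ops/sec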

Let's assume both systems are equally reliable at the machine level. Google's research indicates that in their datacenters each server has an MTTF of 3141 hrs, or 2.7 failures per year. Google also reports a rack MTTF (usually power supply failures) of 10.2 years, roughly 30x as reliable as a server, so we'll ignore that to make the analysis simpler. (This is from Google's paper studying server failures here.)

The MTBF of a Cisco network switch is published at 54,229 hrs on the low end, to 1,023,027 hrs on the high end. For our purposes, we'll ignore switch failures, since they affect the availability and consistency of both systems about the same, and they're 1 to 2 orders of magnitude rarer than server failures. (This is from a Cisco product spreadsheet here.)

Assume we want to meet a latency SLA 99.9% of the time (the actual latency SLA threshold number doesn't matter here).

On Dynamo, that means each node can fail the SLA 1.837% of the time. Each read queries 3 nodes but only uses the first 2 responses, so a request misses its SLA only when at least 2 of the 3 nodes miss theirs. With P as the per-node SLA failure probability, P(at least 2 of 3 fail) = 3P^2(1 - P) + P^3 = (3 - 2P) * P^2, so:

    0.001 = (3 - 2 * P) * P ^ 2

or:

    P ≈ 0.01837

On Couchbase, if a master node fails, the cluster must recognize the failure and fail the node out. Given Google's MTTF figure above, if the cluster can fail out a node in 30 secs and it takes another 4.5 minutes for the new master to warm up its RAM cache, that's 2.7 failures/year with 5 minutes of effective downtime each, so queries will fail about 0.0026% of the time due to node failure (13.5 minutes out of the 525,600 minutes in a year).

For Couchbase to meet the same SLA:

    0.001 = P(SlaFail) + P(NodeFail) - (P(SlaFail) * P(NodeFail))

    0.001 = P(SlaFail) + 0.000026 - (P(SlaFail) * 0.000026)

    P(SlaFail) ≈ 0.000974
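A quick numeric check of both equations (a throwaway Python sketch using only arithmetic and bisection):

    def dynamo_request_fail(p):
        # P(at least 2 of 3 nodes miss the SLA), per-node failure rate p.
        return (3 - 2 * p) * p ** 2

    # Solve 0.001 = (3 - 2P) * P^2 for P by bisection.
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if dynamo_request_fail(mid) < 0.001:
            lo = mid
        else:
            hi = mid
    print(f"Dynamo per-node SLA failure budget: {lo:.5f}")  # ~0.01837 (1.837%)

    # Couchbase: 2.7 node failures/year, 5 minutes of downtime each.
    node_fail = 2.7 * 5 / (365 * 24 * 60)
    sla_fail = (0.001 - node_fail) / (1 - node_fail)
    print(f"node-failure downtime fraction: {node_fail:.6f}")  # ~0.000026
    print(f"Couchbase per-node SLA budget:  {sla_fail:.6f}")   # ~0.000974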

Note: Some things I'm omitting from the analysis: when a Dynamo node fails, the two remaining nodes face a stricter latency requirement, since the SLA now needs both of 2 responses instead of 2 of 3 (the per-node failure budget would drop from 1.837% to ~0.05%); and the increased work on the remaining servers when a Couchbase server fails. Both are only temporary, go away when a new server is added back and initialized in the cluster, and shouldn't change the numbers significantly. There is also the time to add in a new node and rebalance load onto it. At Couchbase we work very hard to make that as fast and efficient as possible. I'll assume Dynamo systems do the same, that the cost is the same, and omit it, though I think we are the leaders in rebalance performance.

With this analysis, a Couchbase node can only fail its SLA on about 0.097% of requests, while a Dynamo node can fail it on 1.837%. Sounds good for Dynamo, but it must handle 2x the throughput per node on 3x the data, and with 2x the total network traffic. And for very low latency response times (our customers often want sub-millisecond latency), meeting the SLA typically means a DBMS must keep a large amount of the relevant data and metadata in RAM, because random disk fetches carry a huge latency cost. With disk fetches 2 orders of magnitude slower on SSDs (100x) and 4 orders of magnitude slower on HDDs (10,000x), without enough RAM the disk accesses pile up fast, and so do the latencies.

So the fact that each Dynamo node can fail its SLA at a higher rate is a very small win when it still needs to keep nearly 3x the working set ready in memory, because each node serves 3x the data at all times for read requests (it can fail its SLA slightly more often, so it's actually about 2.97x the necessary RAM), and uses 2x the network capacity.

Damn Dynamo, you sure do work hard!

Now Couchbase isn't perfect either, far from it. Follow me on Twitter @damienkatz. I'll be posting more about Couchbase's shortcomings, capabilities and technical roadmap soon.

Link

January 18, 2013

Development Methodologies?

Hi Damien,

...

If I were to list projects as small, medium, and large or small to enterprise, what methodologies work across them? My thoughts are Agile works well, but eventually you'll hit a wall of complexity, which will make you wonder why you didn't see it many, many iterations ago. I don't know anyone at NASA or Space-X or DoD so I don't know what software methodology they use? Given your experience can you shed some light on it?

Regards,

Douglas

I don't really use a specific methodology. However, I find it very useful to understand the most popular methodologies and when each is useful. Then, at various stages of a project, you know what kinds of approaches help and how to apply them to your situation.

For example, I find Test Driven Development (TDD) very much overkill, but for a mature codebase I find lots of testing invaluable. Early in a codebase I find lots of tests very restrictive; I value the ability to quickly change a lot of code without also having to change an even larger amount of tests. Early on, when I'm creating the overall architecture that everything else will hang on, and the code is small and the design is plastic and I can keep it all in my head, I value being able to move very quickly. However, other developers may find TDD very valuable for thinking through the design and problems. I don't work like that. To each his own.

Blindly applying methodologies or even "best practices" is bad. For the inexperienced it's better than nothing, but it's not as good as knowledge of self and team, experience with a variety of projects and their stages, and good old-fashioned pragmatism.

Link