Zookeeper in AWS
If you've set up a highly available, auto-scalable solution in AWS, it is necessarily stateless, since machines can pop into and out of existence. That's fine if your state is stored in a database on some other machine, but most of the server software you might want to migrate to the cloud probably wasn't designed for that, and probably relies on some rarely-modified config files.
One option is a shared file system (backed by either S3 or EBS) that just contains your config files, potentially symlinked from their native locations. This approach is described on Stack Overflow and TurnkeyLinux, but it has a significant disadvantage: a single point of failure. If your file share goes down, all your instances go down. Another option is a coordination service like ZooKeeper, although unless that service runs as a distributed ensemble (a single ZooKeeper node is just another central server) it suffers from the same single point of failure. Amazon S3 is a highly redundant system, but it has failed in the past and is not ideal for incremental changes to files. Since ZooKeeper supports clustering, it is possible to run it on each instance in your auto-scaling group, making it as reliable as the applications that depend on it.
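The shared-filesystem approach above can be sketched in a few lines: the canonical config lives on the shared mount, and each instance symlinks it into the application's native location. The paths and file name here are hypothetical, and a temp directory stands in for the shared mount so the sketch is self-contained.

```python
import os
import tempfile

def link_shared_config(shared_dir, native_path, name="myapp.conf"):
    """Symlink a config file on the shared mount into its native location.

    shared_dir and native_path are hypothetical; in practice shared_dir
    would be something like an EBS- or S3-backed mount point.
    """
    target = os.path.join(shared_dir, name)
    os.makedirs(os.path.dirname(native_path), exist_ok=True)
    # Replace any stale file or symlink already at the native location.
    if os.path.islink(native_path) or os.path.exists(native_path):
        os.remove(native_path)
    os.symlink(target, native_path)

# Demo: a temp directory stands in for the shared mount.
shared = tempfile.mkdtemp()
with open(os.path.join(shared, "myapp.conf"), "w") as f:
    f.write("max_connections=100\n")

native = os.path.join(tempfile.mkdtemp(), "etc", "myapp.conf")
link_shared_config(shared, native)
print(open(native).read())  # the app reads its config through the symlink
```

Edits made on the shared copy are then visible to every instance immediately, which is exactly why the mount itself becomes the single point of failure.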
ZooKeeper requires that machine clocks be synchronized, since it compares last-modified timestamps. RolandPJ@AWS claims that in EC2 the system clock is based on the hardware clock, so you don't need NTP servers. But that claim is from 2007, and it's not clear that it remains true: on the first two boxes I checked (in the same availability zone), the system clocks were more than a minute out of sync.
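To make the skew concrete: the offset NTP computes between two clocks uses four timestamps from a round trip, which cancels out network latency. A minimal sketch of that formula, with hypothetical numbers (a server 30 seconds ahead, 0.1 seconds of latency each way):

```python
def clock_offset(t1, t2, t3, t4):
    """Classic NTP offset formula.

    t1: client send time, t2: server receive time,
    t3: server send time, t4: client receive time (all in seconds).
    Returns how far ahead the server's clock is of the client's.
    """
    return ((t2 - t1) + (t3 - t4)) / 2.0

# Hypothetical round trip: server clock 30s ahead, 0.1s latency each way.
offset = clock_offset(100.0, 130.1, 130.1, 100.2)
print(offset)  # -> 30.0
```

A one-minute skew like the one I observed is far larger than typical round-trip latency, so even this simple measurement would detect it reliably; running ntpd on each instance is the safer bet.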