I've long been a fan of evaluation and of using information, good information, to make decisions. The test and the results (published on Vox) are interesting. If the test results hold true in DC, there is much work to be done to educate elected and appointed officials, providers, and the community about what works and what doesn't.
Test designers Benjamin Todd and William MacAskill (of 80,000 Hours) reach a similar conclusion, not about DC, but about the public and electeds in general. They wrote about it on Vox:
What can we learn from this? Sadly, it isn’t possible for the public to know ahead of time whether a nice-sounding idea will actually help people or hurt them. Whether it’s a politician proposing a new social program for young people or a charity fundraiser describing how they are going to help the homeless, neither your head nor your gut can consistently tell you if their approach is going to work. A lot of things that sound good don’t do good, and vice versa.
Instead, you have to get experimental evidence. What trials have been run? How did the people who didn’t get the program compare with those who did? Were they comparable groups? What do experts who conduct reviews of the field’s research conclude?
Reactions on Hacker News are also quite interesting. While the readers may not be experts in social science or human services, they do have an interest in science and rigor. I am particularly enamored with the comment by NateLawson, who suggests that all legislation include A/B testing, a twist on sunset provisions.