The task of synthesizing programs given only example input-output behaviour is experiencing a surge of interest in the machine learning community. We present two directions for applying machine learning ideas to this task. First, we describe the TerpreT framework, which uses end-to-end differentiable program interpreters and gradient descent to synthesize programs. We compare the efficacy of this approach to traditional synthesis techniques and explore possible advantages of gradient descent. Second, building on lessons learned from TerpreT, we develop the DeepCoder system, which induces programs from input-output examples by using a neural network to guide traditional search techniques. DeepCoder achieves an order-of-magnitude speedup over optimized baselines and can solve problems comparable in difficulty to the simplest problems on programming competition websites.
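
The flavour of neural-guided search described above can be sketched as follows. This is a minimal illustrative toy, not the paper's system: the DSL, the input-output examples, and the per-operation scores (standing in for a neural network's predictions) are all assumptions made for the example.

```python
# Toy sketch of example-driven program search with score-guided enumeration,
# loosely in the spirit of neural-guided synthesis. Everything here (the DSL,
# the examples, the scores) is illustrative, not taken from the paper.
from itertools import product

# A tiny DSL of unary operations on integer lists.
DSL = {
    "double":  lambda xs: [2 * x for x in xs],
    "sort":    lambda xs: sorted(xs),
    "reverse": lambda xs: list(reversed(xs)),
    "drop1":   lambda xs: xs[1:],
}

def run(prog, xs):
    """Apply a sequence of DSL operations to an input list."""
    for op in prog:
        xs = DSL[op](xs)
    return xs

def synthesize(examples, scores, max_len=3):
    """Enumerate programs up to max_len ops, trying high-scoring ops first.

    `scores` stands in for a learned model's prediction of how likely each
    operation is to appear, given the input-output examples.
    """
    ops = sorted(DSL, key=lambda op: -scores.get(op, 0.0))
    for length in range(1, max_len + 1):
        for prog in product(ops, repeat=length):
            if all(run(prog, inp) == out for inp, out in examples):
                return prog
    return None

# Input-output examples consistent with "sort, then double each element".
examples = [([3, 1, 2], [2, 4, 6]), ([5, 4], [8, 10])]
# Hypothetical model scores: relevant ops ranked highly, so they are tried first.
scores = {"sort": 0.9, "double": 0.8, "reverse": 0.1, "drop1": 0.1}
print(synthesize(examples, scores))  # → ('sort', 'double')
```

Ranking operations by predicted relevance shrinks the effective search space: programs built from high-scoring operations are enumerated first, so consistent programs tend to be found far earlier than under an unguided enumeration.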