Interview with Abel Wang and Steven St. Jean (Source Control)

The Discussion

  • Great stuff, guys!

    I'd love to see easy reports that measure the maturity of a customer's practices here. It's really hard for a team to get better without more insight into what's going on and what metrics to work toward. Maybe things like:

    # of commits between merges

    time between merges

    some measure of pending changes that sit out there for days/weeks and then get rushed into a branch as one massive, lightly tested change

    branch depth

    # of active branches

    commit size

    story points (or hours) associated with a commit (another indication of size / impact)

    I'm sure there are better ideas for reports, but it never surprises me to see teams languish in bad practices when they have no insight into what to work toward and what their current practices are costing them.

  • @jfattic: Hey Jeff! That is great feedback. I will see how we can act on it.

  • Ian Ceicys

    Awesome to see the team back together. You guys rock! Keep up the great stuff.

  • GREAT explanations. I watched this all the way through twice and plan to watch again. So, now I have questions, and sorry for the verbosity. :)

    Let's say I'm on an "Agile" team of 3 devs working on a large, monolithic web app. 2 week sprints for development. Using Git. But QA is still manual so longer QA cycle. Don't have much unit test coverage. Trying to follow the strategy that Abel described. Sprint begins, each dev starts a user story. Let's say two of those devs are going to be in the same part of the code.

    If I understood correctly, they are each going to branch off main and create their own PBI/topic/user-story branch locally. Neither of them is going to put in a pull request to merge back to main until their code is done and they are confident of its quality and stability. I'm thinking maybe the 2 devs working in the same code might be pushing their PBI branches up to the remote in order to sync with each other frequently and minimize merge conflicts. Tell me if I have any of that wrong.

    Here's where I'm confused. When a dev is done, I would think that, in the absence of unit tests, part of their confidence that the code is ready for a pull request to main would come from having done a build/deploy to a DEV/integration environment and some manual dev smoke testing there first. Cause "works on my machine", ya know, doesn't always work elsewhere. But if the build + pipeline only runs off main, then am I merging back to main from my PBI branches in order to get it to dev?

    If I am, what happens when dev 1's code gets to dev/int and it's ready to move forward but dev 2's code gets there and there's an issue discovered? Am I gonna hold up dev 1's code from going further until dev 2's code gets stabilized since we're both in main now? How do I not end up cherry picking in that situation? Is this where feature flagging comes in? Would you do that even down at the user story level?

  • Hi Marvin, great questions. Here are my thoughts:

    1) If Dev 1 and Dev 2 are working on code that is THAT intertwined, maybe they should be in the same branch. Alternatively, since you're using git, it's super easy for them to sync with each other without needing to go back to master or a long-living dev branch. But really, do what makes sense for you.
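    That branch-and-sync flow can be sketched end to end. Here it is driven from Python's subprocess so the whole thing runs in a throwaway repo (the branch and story names are made up for illustration):

    ```python
    import subprocess
    import tempfile

    def git(*args, cwd):
        """Run a git command in the given repo and return its stdout."""
        return subprocess.run(
            ["git", *args], cwd=cwd, check=True, capture_output=True, text=True
        ).stdout.strip()

    repo = tempfile.mkdtemp()
    git("init", cwd=repo)
    git("config", "user.email", "dev@example.com", cwd=repo)
    git("config", "user.name", "Dev One", cwd=repo)
    git("commit", "--allow-empty", "-m", "initial commit", cwd=repo)
    git("branch", "-M", "main", cwd=repo)  # normalize the default branch name

    # Each dev branches off main for their own PBI/user story:
    git("checkout", "-b", "pbi/1234-login-form", cwd=repo)
    git("commit", "--allow-empty", "-m", "WIP: login form markup", cwd=repo)

    # Pushing the topic branch lets the second dev pull it and sync daily,
    # long before anyone opens a pull request back to main:
    #   git push -u origin pbi/1234-login-form              (Dev 1)
    #   git fetch && git merge origin/pbi/1234-login-form   (Dev 2)
    print(git("rev-parse", "--abbrev-ref", "HEAD", cwd=repo))
    ```

    The point of the commented push/fetch lines is that the shared topic branch, not main, is where the two devs integrate with each other.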

    2) Wait... absence of unit tests???? Does... not... compute... null pointer... Lol. I get it, a lot of projects have mountains of legacy code. I will just say that if you truly want to move at DevOps speed and still maintain quality, you NEED unit tests. Lots of them. I'm not saying go back and write unit tests for everything, but I am saying moving forward, any new feature or bug fix, WRITE UNIT TESTS.

    Ok, had to get that off my chest.
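    For the "any new feature or bug fix, write unit tests" rule, the bar really is this low. A minimal sketch using Python's stdlib unittest (the function under test is entirely made up):

    ```python
    import unittest

    def apply_discount(price: float, percent: float) -> float:
        """Hypothetical new feature: discount a price by a percentage."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTests(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.0, 25), 150.0)

        def test_zero_discount_is_identity(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

        def test_invalid_percent_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main(exit=False, verbosity=2)
    ```

    Tests like these are exactly what a PR build runs before any human even looks at the code.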

    There is nothing stopping you from having a build queue off of a PR or branch that deploys to an integration environment, runs automated smoke tests (or manual integration tests), and only merges the code to master after it's verified. I like triggering this off of a PR for 2 reasons. First, PRs are another step that really helps ensure quality in your code. That's assuming the team takes PRs seriously (which they should) and doesn't let them stack up unreviewed. Second, having a build and automated integration tests run against a PR in a dedicated environment is awesome. The PR won't even pass until the code builds, ALL UNIT TESTS PASS, the app deploys successfully, and the automated smoke tests run. If you must do manual tests, that's ok too; sometimes it is necessary to do manual integration tests. But automated REALLY rocks: faster, not prone to user/tester error, etc. And all of this is just part of a PR. In fact, the team (humans) never even needs to look at a PR until, at the very least, the app builds, unit tests pass, and automated integration tests pass. cool
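    As a rough sketch of what a build queueing off of a PR can look like in Azure Pipelines (the script names and environment are made up, and for Azure Repos the PR trigger itself is configured as a branch policy on master rather than in the YAML):

    ```yaml
    # Hypothetical azure-pipelines.yml for PR validation
    trigger: none              # no CI on direct pushes; this build validates PRs

    pool:
      vmImage: ubuntu-latest

    steps:
      - script: ./build.sh
        displayName: Build the app

      - script: ./run-unit-tests.sh
        displayName: Run ALL unit tests

      - script: ./deploy.sh --environment pr-integration
        displayName: Deploy to a dedicated PR environment

      - script: ./smoke-tests.sh --target pr-integration
        displayName: Run automated smoke tests
    ```

    If any step fails, the PR shows as failed and can't complete, which is exactly the gate described above.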

    3) Feature flags. At the user story level. And these should be used regardless of whether you are building, deploying, and testing at the PR level. That PR flow seems like a more advanced DevOps technique, but again, to be truly successful and move at DevOps speed... I won't go so far as to say it's impossible without feature flags, but it is SO MUCH NICER with them. Feature flags are also WAY easier than people think. There are a bunch of tools (some free, some paid) that do a great job of implementing feature flags. Granted, adding and maintaining feature flags is a tax. It adds complexity and work. But from my point of view, this is a tax worth paying, as the benefits seriously outweigh the pain and complexity of maintaining feature flags.
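    A feature flag at the user-story level can be as small as a lookup guarding the new code path. A minimal Python sketch (flag names and the in-memory store are made up; dedicated tools manage the flag values for you in production):

    ```python
    # Hypothetical flag store; real systems load this from a service or config.
    FLAGS = {
        "new-checkout-page": False,  # user story still in progress: stays dark
        "order-history-v2": True,    # finished story: turned on for everyone
    }

    def is_enabled(flag: str) -> bool:
        """Look up a flag; unknown flags default to off so unfinished code stays dark."""
        return FLAGS.get(flag, False)

    def render_checkout() -> str:
        # Both code paths ship to production in the same build;
        # the flag decides which one users actually see.
        if is_enabled("new-checkout-page"):
            return "new checkout page"
        return "old checkout page"

    print(render_checkout())
    ```

    This is what lets an unfinished user story merge to master safely: the code is there, but the flag keeps it invisible until it's ready.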

    I could spend 2 hours talking JUST about unit testing and feature flagging but I'll save that for a live talk :)
