My first comment on seeing this session was to replay the Jerry Seinfeld epithet "Newman" over and over in my head (LOL!). That is, of course, intentional because what is Newman in Seinfeld? He's a Postman ;).
I have been interested in looking at how I can move more capability away from dedicated tools and make it drivable from the command line. Postman is a neat tool, to be sure, but it is also a tool that tends to require one to actively interact with it. Thus I am excited to hear Christina Thalayasingam talk about this.
Okay, so what is the purpose of a tool like Newman? For that matter, why would we need a tool like Postman? The key reason for Postman is that it is designed to test API interactions. You send requests (GET, POST, and so on) with payloads formatted as XML, JSON, etc., and you get back a response based on what you sent. This can be super helpful when you want to check whether parameters are set, or to get a specific value without having to dive into the UI to find those details.
Postman does this very well, to be sure, but again, as I stated in my intro, Postman is a standalone tool that I as a user interact with. I can script it, and I can add automation elements, but I can't practically run a script to do the things that I want Postman to do. This would be especially problematic in a CI/CD setting. Jenkins isn't going to fire up Postman. Well, it could, but my ability to interact with it remotely would be less than desirable, to say the least. However, the collections I have in Postman are valuable and usable. It would be cool to access those and run them as needed to qualify the API tests I've already created.
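This is exactly the gap Newman fills: collections export as JSON, so a CI job can run them headlessly. A minimal sketch of what that might look like (the collection and environment file names here are placeholders for your own exports, not anything from the talk):

```shell
# Newman is an npm package, so install it globally first.
npm install -g newman

# Run an exported collection against an exported environment.
# "orders-api.postman_collection.json" and "staging.postman_environment.json"
# are hypothetical file names standing in for your own exports.
newman run orders-api.postman_collection.json \
  -e staging.postman_environment.json
```

Newman exits with a non-zero status when any test in the collection fails, which is precisely what a CI runner like Jenkins needs to mark a build red.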
Christina pointed out some additional tools they considered, including Paw (limited to Mac originally, though now making its way to Linux and Windows). Postman of course also works well, and I'm already using it. Insomnia is another tool that is interesting but has limitations around exporting its scripts and being used outside of itself. The goal here ultimately is to be able to control API tests with a command-line approach. I mean, if we want to get right down to it, curl can certainly do that (in fact, that's been my typical approach), but curl requires some finessing with what you get back and tweaking your return values so they can be used and shared/validated.
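For comparison, this is the kind of finessing I mean with curl: checking even one field of a JSON response means piping through something like jq and writing the validation yourself (the URL and field name here are made up for illustration):

```shell
# Hit a hypothetical endpoint, extract one field from the JSON
# response with jq, and fail the script if it isn't what we expect.
status=$(curl -s https://api.example.com/v1/orders/42 | jq -r '.status')
if [ "$status" != "shipped" ]; then
  echo "unexpected status: $status" >&2
  exit 1
fi
```

With a Postman collection, that assertion already lives in the collection's test scripts, so there is nothing to rebuild on the shell side.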
So what does Newman do? It's basically a collection runner. That's it. I could use some global variables and create data-driven test runs, one after another, to make sure we don't get errors when we run at scale. We can add delay and timeout values to see how robust our environment is, or where we would run into issues with load or performance.
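Those knobs map directly onto Newman's CLI flags. A sketch of a data-driven run (the collection and CSV file names are placeholders):

```shell
# Run the collection once per row of users.csv (data-driven iterations),
# pausing 200 ms between requests, with a 5 second per-request timeout.
newman run orders-api.postman_collection.json \
  --iteration-data users.csv \
  --delay-request 200 \
  --timeout-request 5000
```

Each row of the CSV feeds the collection's variables for one iteration, which is how the same collection gets exercised across many scenarios without editing it.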
I like how the output comes out as a table when it finishes, so that it would be easy to share and view results. That alone is worth me taking a closer look at this. Also, the ability to set up a variety of scenarios for the same collection via variables, and to run those scenarios for multiple iterations, sounds like a definitely useful addition.
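That summary table is Newman's default CLI reporter; if I want something shareable beyond the terminal, I can ask for machine-readable output alongside it (the export file name is a placeholder):

```shell
# Keep the CLI summary table, and also write a JSON results file
# that can be archived or parsed by other tooling.
newman run orders-api.postman_collection.json \
  -r cli,json --reporter-json-export results.json
```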