Hi there, I'm the founder of a company called Stoplight, and we have a purpose-built solution for this very use case. You can read more about it here: https://stoplight.io/platform/scenarios.
Basically, you set up test cases for your API(s), and we automatically contract test the inputs/outputs of the requests against your OAS specification where possible. If anything does not validate against the schemas defined in your OAS specification, the test fails with descriptive errors. If your OAS is ever updated, those changes automatically flow through to the tests, since the tests reference the OAS spec rather than duplicating data from it.
A couple more things:
- You can create these tests with our visual UI, or write the underlying JSON that describes the tests by hand.
- You can run the tests inside of our UI, or install Prism (our command-line runner) to run them completely outside of Stoplight (CI, terminal, etc.).
- We plan to support OAS 3 in Q4 of this year.
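To make the idea concrete, here is a minimal sketch of what a contract test like this does under the hood (this is illustrative only, not Stoplight's implementation; the spec fragment, endpoint, and field names are made up, and it uses the common `jsonschema` library for validation):

```python
# Illustrative contract test: validate an API response against the schema
# defined in a Swagger/OAS spec, so the test stays in sync when the spec changes.
from jsonschema import validate, ValidationError

# A minimal, hypothetical OAS 2.0 fragment describing GET /users -> 200.
spec = {
    "paths": {
        "/users": {
            "get": {
                "responses": {
                    "200": {
                        "schema": {
                            "type": "array",
                            "items": {
                                "type": "object",
                                "required": ["id", "name"],
                                "properties": {
                                    "id": {"type": "integer"},
                                    "name": {"type": "string"},
                                },
                            },
                        }
                    }
                }
            }
        }
    }
}

def contract_test(path, method, status, body):
    """Look up the response schema in the spec and validate the body against it."""
    schema = spec["paths"][path][method]["responses"][status]["schema"]
    try:
        validate(instance=body, schema=schema)
        return True, None
    except ValidationError as err:
        return False, err.message  # descriptive error explaining the mismatch

ok, error = contract_test("/users", "get", "200", [{"id": 1, "name": "Ada"}])
print(ok)         # True: the response matches the spec
ok, error = contract_test("/users", "get", "200", [{"id": "1"}])
print(ok, error)  # False: the body violates the schema (wrong type / missing field)
```

Because the test only holds a pointer into the spec, updating the schema in the spec updates every test that references it.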
We live and breathe API tooling and specifications. If you have any questions about process, our product, API strategy, etc., I'm happy to chat - just shoot me an email at marc [at] stoplight.io!
Hi HN, we originally developed this for internal use over at Stoplight, and realized that a lot of the tech could be repurposed and repackaged into a nice little complementary tool for anybody working with Swagger. Thanks for taking a look - happy to answer questions, and eager to hear any feedback!
Really great feedback, thanks! We've heard that the main navigation is confusing, and are working on improving it as I type this. Could you send me an email at marc @ stoplight.io? I would love to bounce some of our ideas off of you, and hear what you think.
Astute catch! We're actually tracking this issue internally already. We'll be beefing up that generator to aggregate multiple values for the same property when the data is an array of objects. Look for that in an update in the next two weeks.
Great question! We are considering adding first-class support for Blueprint. In the meantime, we recommend converting first to OAI/Swagger with the awesome tool over at https://apitransformer.com
You can invite team members to your workspace in the API Designer, and only they will have access (read or read/write) to the APIs in that workspace.
From the API Designer (which is private), you can optionally publish any of your API versions out to api-docs.io, for public consumption. You don't even need to publish your entire API definition out - you can mark which endpoints/models are private versus public. Only the parts of your internal docs marked as public will be published out :).
It really is free, for individual use, forever :). Pricing starts at the team level - at $8/month/member. We're also introducing cheaper, annual pricing soon.
The hosted documentation integration is completely free - we have a paid option coming soon that adds custom domains, theming, analytics, etc.
Yeah, the copy kind of indicated to me that there was some pricing scheme coming, but there is no indication (or info) about that on your site. Just something to keep in mind if you're going to take a little time to add the pricing details :)
How complete should the published documentation be? I see the headers I defined, but not the query params.
Also, I define the OAuth2 security in the version, but it shows up nowhere else except for the exports. It would be nice if you could show which resources are secured by what. For instance, all of my resources require an API key as well as an OAuth2 token.
Interesting, query params should be published - could you send me the link to the docs you published?
In the "settings" tab for each endpoint, you can mark which security schemes apply to the endpoint. It's kind of hidden; we're moving it to the general definition tab. Also, that info will be exposed in api-docs (which security schemes an endpoint requires/supports) in an update later this week!
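For reference, the "API key AND OAuth2 token" requirement maps cleanly onto OAS 2.0: listing both schemes inside a single security requirement object means both must be satisfied. A hand-written sketch (endpoint, URLs, and scope names are placeholders):

```yaml
securityDefinitions:
  apiKey:
    type: apiKey
    name: X-API-Key
    in: header
  oauth2:
    type: oauth2
    flow: accessCode
    authorizationUrl: https://example.com/oauth/authorize
    tokenUrl: https://example.com/oauth/token
    scopes:
      read:things: Read access

paths:
  /things:
    get:
      security:
        # One object listing both schemes means BOTH are required together.
        - apiKey: []
          oauth2: [read:things]
      responses:
        "200":
          description: OK
```

Separate entries in the `security` array would instead mean "either one is sufficient," so the single-object form is the one that expresses the AND.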
Hey! Marc MacLeod here - founder. Basically, you put Prism between your API consumer (mobile app, website, integration tests, library, curl request, whatever) and the API itself.
Prism processes the traffic that passes through it, and generates your spec from that. It can identify dynamic parameters in the URL, build JSON schemas, etc.
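As a rough illustration of the "identify dynamic parameters" step (this is not Prism's actual algorithm, just a toy sketch): segments that vary across observed requests get collapsed into a named path parameter.

```python
# Illustrative sketch: generalize observed request paths into a templated path
# by replacing segments that vary across requests with a parameter placeholder.

def generalize(paths):
    """Collapse paths like /users/123 and /users/456 into /users/{param1}."""
    split = [p.strip("/").split("/") for p in paths]
    template = []
    param_count = 0
    for segments in zip(*split):
        if all(s == segments[0] for s in segments):
            template.append(segments[0])       # static segment: keep as-is
        else:
            param_count += 1                   # varying segment: dynamic parameter
            template.append("{param%d}" % param_count)
    return "/" + "/".join(template)

print(generalize(["/users/123/posts", "/users/456/posts"]))
# -> /users/{param1}/posts
```

With the paths templated, the request/response bodies seen for each templated endpoint can then be sampled to build the JSON schemas.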