UI Component Workflow Testing – Validating Entire User Journeys by Isolating Front Ends Using OpenAPI

Presenter: Hari Krishnan
Event: Appium Conf 2024
Location: Online

Presentation summary

Are you having to wait until the integration or workflow testing environment to validate user journeys? What if I told you that you can shift left and test entire workflows right on your local machine and in your CI pipeline, leveraging OpenAPI specifications for stubbing APIs? In this experience report, we will explore how API spec driven HTTP stubbing can revolutionise your testing strategy by enabling comprehensive component workflow testing early in the development cycle. For example, how you can go about testing the entire user login journey with OTP, including error scenarios like invalid code, timeouts, etc., right from the comfort of your local machine. And the joy of being able to independently develop and test your Front End without having to wait for the Back End. And there is more: these component workflow tests can be run on Android, iOS and Web, with Teswiz. Join me to learn more about how we achieved this at MyJio.

Transcript

So welcome everyone to this session about component workflow testing, validating your entire user journeys using API specifications for intelligent service virtualization. My name is Hari Krishnan. I am an enthusiast in the conferences space. I speak at a lot of conferences and I also like to volunteer at them. I’m again honored to be a part of the Appium conference here. My role involves being a transformation advisor, a coach and a trainer.

 

And of late I’ve also been in the API governance strategy business. I am the co-creator of Specmatic, and that’s a quick introduction about myself. So with that, let me jump right into the content for today’s talk. All right, so let’s say we are building a mobile application. We usually would have a back end for front end to aggregate all the responses from our domain services, and maybe there is storage involved, there are multiple databases and whatnot. This is a simplified version of what even a small mobile application would involve. This is the backend for the application that we are talking about, but that’s our own backend that we control. That apart, there would be external dependencies, third party integrations such as your SMS gateways, email servers, payment aggregators and the like, which we have to go through.

 

And obviously our backend itself will have to speak with those external dependencies, and our own application, the mobile app, will also have out of band communication with these external dependencies in order to complete functionalities. Now, even though this fits on my screen and looks small enough, it is fairly complicated to test all at once. And any realistic company, and the type of case study we’re going to talk about today, MyJio, which is more like a super app, is not going to be anywhere close to this level of triviality. It’s going to be a lot more complex. Now, how do we go about dealing with that kind of complexity? Let me add one more curveball to this. If I’m building this sort of an application, for the most part the feature may not be available in its completeness across all the components. Let’s say most of this, or some of this, is under construction. I cannot verify or test features by themselves.

 

I cannot even start building these features, so to speak. And I cannot say that the domain service team will finish, then the BFF team will start, and once they finish and make their service available to me, only then will I start building my application. And in between someone might say that for the third party we have not secured the licenses or finished the procurement, so that’s not done yet and is going to take some time. The sandboxes are not ready, or maybe they will be down. All of these pieces are in play.

 

And I cannot just sit there saying my application is something I cannot build until all these pieces are available. I need to move forward. And this happens to be my system under test for today’s case study. I need to test this. So at this point I have neither the ability to build the application, let alone talk about testing it. And where we want to get to is that we move in parallel: independent parallel development and the ability to deploy each of these pieces with confidence. And if I’m particularly an app development team and I’m focused on the mobile, I want to be able to make progress independent of my other dependencies. Now, what’s the usual answer? Anyone in the audience, if you could put that in the chat, that’d be great.

 

What would be your typical way to sort out this kind of situation? Can you put that down in the chat, please? Excellent, I already see someone saying mock APIs. Thanks. Yeah, great. That’s great. You use Proxyman. I haven’t used it, but I’m guessing the concept is pretty much the same, which is that I mock the API so I can move forward.

 

Correct. Any other responses, any other approaches people have tried? Okay, great. Keep them coming, but meanwhile I’ll move forward. I think this is a great first step that someone has given us, so let me start with that. Okay, so now the service dependency is not available. And like a friend rightly said, I would put a service mock or an API mock in place and move forward.

 

This is a very acceptable way to move forward. Many companies that I’ve worked with, and I personally, have done this in the past, but there’s a major difficulty with this approach. Let’s say I put the service mock together with my current knowledge of how the backend behaves. However, let’s say that service has evolved to API version two, and either they have not informed me of that evolution, or I may have missed it. And the mock that I put together was hand-rolled. Let’s say I have used WireMock or tools like those, where I have started up the mock server with some canned responses that I put together to the best of my understanding of how the API will behave.

 

Now, the difficulty with this approach is that when the service is evolving, obviously I’m not aware of it, and I constantly need to do upkeep of that API mock. Otherwise the service mock that I have hand-rolled is not going to be representative of the actual behavior of the real service. End result: fine, I can independently build my application, I can independently test my application, but when I take it to the integration environment where the real service is deployed, things are going to fall apart. Everyone with me so far? So this is the major difficulty that we were facing even in the team that we were working with. If we had to isolate, we could using API mocks, but that API mock had to be in line with the actual service, otherwise it wouldn’t make sense. So how did we go about solving it? Let me walk you through that. But before I get in there, anyone else has any thoughts in terms of how you could solve this problem? Charles Proxy.

 

Okay yeah, Ashwita, thanks for that. That’s also something we used: record and replay with Charles Proxy. What happens is I would have recorded it, for example, at the beginning of the month; by the end of the month the services have evolved again, which means I need to re-record, and someone has to diligently do that. It’s a good technique, but it has the same difficulty. The way we solved this, or at least the answer that we found, is to have a service mock which is based off of an OpenAPI specification. I’m sure most of you are familiar with Swagger or OpenAPI.

 

Now, having this meant there was an unambiguous documentation of what the API’s behavior is supposed to be. Which means now when I brought up my service mock, it was working off of this API specification, and the service virtualization was truly emulating the behavior of the real service. And if the service evolves, the spec evolves with it, and thereby my mock evolves with it too. So they are in lockstep. At least, that’s the idea. Now, if I start doing this only on my side, it doesn’t make sense, because for this equation to be complete, the provider team or the backend service team also has to keep their side of the promise: that the API specification they’re sharing with me is indeed the true representation of their application’s behavior.

 

Otherwise I will be going off of this “truth” while building my mock, and my mock is going to fall apart. That’s not useful. So that’s where the service itself could run the same specification as a contract test, thereby keeping itself honest to the fact that the specification being handed out to consumer teams like myself, who are building the mobile app, is indeed the right sort of specification. This way we are keeping both these pieces, the service mock and the service, in lockstep. So that’s the immediate intervention that helped in this particular case: isolating, and isolating with a truly emulating mock. Now let’s talk about the next step here. How does a service mock work when it is based off of an API specification? First of all, any expectation that you drop into a mock, say Charles Proxy or WireMock, or for that matter any mock service, is a canned response, which is a request and a response: given this request, you give me back this response.
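To make this concrete, here is a minimal, illustrative sketch of the kind of OpenAPI specification we are talking about. The paths, field names and schemas below are assumptions for the sake of the example, not MyJio’s real API; the point is that one and the same file can drive the consumer-side mock and the provider-side contract test.

```yaml
# Illustrative only: endpoint names and fields are assumed, not the real MyJio API.
openapi: 3.0.3
info:
  title: Login OTP API
  version: 1.0.0
paths:
  /otp/generate:
    post:
      summary: Send an OTP to the given phone number
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [phoneNumber]
              properties:
                phoneNumber:
                  type: string
      responses:
        "200":
          description: OTP sent
          content:
            application/json:
              schema:
                type: object
                required: [otpSent]
                properties:
                  otpSent:
                    type: boolean
```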

 

That’s the behavior of any mock server. Now what a service mock based off of an API spec should ideally do is validate your expectation, the canned response, against the API specification, and accept it only if it matches. So for example, if the Appium test I’m writing is trying to set an expectation with the mock server ahead of its interaction with the application, and that canned response is not adhering to the API spec, then it should be rejected and not get accepted into the mock. Thereby you get fast feedback that, hey, I made a mistake in setting the expectation itself, and I don’t need to go any further. Without this, what would happen is that I’d test the app, move forward, and only hit the issue in the integration environment. This prevents that. So that’s one double click into how a service mock that works off of an API spec should behave.
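As a sketch of that fast feedback, here is a canned expectation in the request/response JSON shape that Specmatic example files use (field names follow the illustrative spec above, not the real API). Against a spec that declares otpSent as a boolean, a spec-aware stub would reject this expectation up front, because the response body returns a string instead:

```json
{
  "http-request": {
    "method": "POST",
    "path": "/otp/generate",
    "body": { "phoneNumber": "9999999999" }
  },
  "http-response": {
    "status": 200,
    "body": { "otpSent": "yes" }
  }
}
```

Fixing the body to "otpSent": true would make the expectation acceptable, and the mistake is caught at expectation-setting time rather than later in an integration environment.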

 

Now let’s get into the crux of the topic for today, now that we have the background of what an API mock should behave like. So this is a screen from the MyJio app itself when I log in. In the context of the screen I’ve put up, what do you think is a component here? Any guesses? How would you qualify a component here? Can you type them into the chat, please? Nav bar. Excellent. OK. Menu, each icon. Mobile, movies.

 

Brilliant stuff. Yeah. And search bar. Correct. So these are all components. But there is an interesting dimension to these components also.

 

Let me walk you through that. Some of you already called it out. So the alerts and notifications, for example, is a component. Then there is a banner. This banner is personalized to me. For example, this banner is showing JioCinema, which means probably I am not already consuming JioCinema. If I am, then it doesn’t make sense to waste that real estate to show it to me again. So that’s personalized.

 

There is that aspect to it. Then there is the recharge section, the prepaid-specific section. That, again, is because I’m a prepaid customer. Now, obviously, the screen is very reactive to, or rather customized to, the person who’s logging in: the profile of the person, what cohort they belong to. And then there is another angle, which is that the features themselves are being A/B tested. And some of the features may be something we want to canary release; we don’t want to release to all cohorts at once.

 

So if we are doing all that sophisticated work, then the screen looks very different to each individual, or each cohort they belong to, when they log in. And given this is a super app, it is going to have so much more dimensionality to it. How do we go about testing it? That’s the big question, and the challenge we had in front of us, given this situation. Sorry, I’ll just finish off one more aspect which I forgot, which is that even here, if I had to test only a single screen, it doesn’t really make sense. For example, if I’m looking at a payment flow, let’s say I’m a prepaid customer trying to recharge. It doesn’t make sense for me to test only one screen and say, hey, did the payment page load? Did the payment-completed page load? That by itself doesn’t mean much. They are all part of a certain workflow, the user journey, within which the testing is actually useful and valuable, which is why the workflow aspect of this component testing is also important.

 

How are these components interplaying with each other, and how are we going to verify that in isolation? That’s the crux of this component workflow testing. Now, how does it fit into the test pyramid itself? Most of us, while writing, say, a mobile app or an embedded web application, already have unit tests in place, in the sense that we have tested the classes, whether the screen independently loads up; for example, in a React application, is the JSX loading up, are the screen components rendering in the right place given the data? We have isolation-tested each section of the page at that level. The next step is contract testing, which I already shared a little bit about on the earlier slide, where I take an API specification and run it as a test against a service, and thereby verify whether that service is adhering to the spec. And likewise I’ve used the spec as a stub in order to isolate myself and verify my side. So the contract testing is also taken care of. Then come the component tests and the API tests, where I am at least verifying independent sections within an API, or within a UI when it comes to a mobile app. So up until this layer we are building up the connections and verifying slightly larger pieces.

 

Now, what is the difference between a component test and a component workflow test? Up to this layer we have never really verified any journeys or any multi-step process. Only when we come to the component workflow test are we going through the entire journey, which I believe is a very high value test and extremely important to identify whether our journeys are intact, because that is what gives us confidence to move to higher environments and, in general, to push features and improve time to market. So that’s the key difference: up to component testing we have only tested independent pieces and sections; verifying the interplay between multiple components, across multiple screens and the journey, is the crux of a component workflow test. Okay, so now that I’ve spoken about where it fits into your test pyramid and the rough percentage it should occupy in your overall test profile, let’s look at the challenges. For argument’s sake, let’s assume we have the good fortune of the entire architecture being available to us, deployed in its full glory, and we can go about testing it right now. What are the first issues we will see? First, obviously, deploying the universe itself. I have to deploy my services, make sure I have provisioned the sandboxes of my third party vendors, and ensure all the connectivity is in place, all the teams are aligned, and all the particular drops that have to happen to each particular section have been done.

 

Deploying this universe is not easy, repeating the process is not pain free, and it’s going to be extremely time consuming. If you’re going to verify multiple journeys every day, this is not going to scale. Second is test data management. Considering the complexity I already shared with you, in terms of the criss-cross of what can happen, the cohorts at play, the features we have to build, the user profiles and which user sees what, trying to put that matrix into the database and make sure all the test data exists for it is a nightmare. And maybe you will even do that, but then handling the residual data from it is not a joke.

 

It is a tremendous amount of effort. Thirdly, even if you handle that, there are time-bound transactions. Sometimes you have a TOTP, a time-bound OTP, or certain offers which are alive only until a certain date, and I have to create the offer to make sure it stays alive at least for the duration of the test. Maybe it has to be post-dated or past-dated. All those difficulties also creep up, and simulating all of that at multiple levels, that orchestration, is simply not possible given the nature of the MyJio app, because it is extremely complex and there is that much functionality to it. That’s the main crux there.

 

Then comes the nature of your third parties themselves. They may offer sandboxes, but those are going to be rate limited, or they may not be available at all times. We cannot depend on those sandboxes to say that we have a test environment which is dependable and repeatable. All of this leads to an overall fragility factor, which makes it almost impossible to run your workflow tests in a lower environment. Which is exactly why we went with the approach of service mocks, because I could take all of this complexity away, shift left, and start testing my application in isolation. Now, just to give you an example: in the previous architecture, if I have to start testing a login OTP flow, I need to enter a phone number, it goes to the backend, the backend talks to the OTP server and the SMS gateway, and the gateway then sends that SMS to me, which means the device should have an IMEI number plus a SIM module on it.

 

Only then can I get the SMS, and then I have to enter it and move forward. Sometimes there are delays with SMS delivery, and forget about even testing it in an emulator, because you cannot have SMSes delivered to an emulator. So even if you say it’s possible to test with a lot of effort, there are scenarios like this where it’s simply not technically feasible. So how did we go about testing this? Very straightforward. I have my service mock where I have already set up my request-response files, the canned responses, to deliver the initial response for the home screen or the login screen. Once the login screen comes up, I enter my phone number, and the phone number goes to the mock server.

 

Again, the mock server already has a canned response: given this phone number, assume it’s a real phone number and send the user to the next screen. We go to the OTP screen, and because it’s a mock, I already know I can hard-code the mock OTP to a certain number for this phone number. I don’t need to depend on a third party or on having a SIM; I can directly assume the SMS is such and such and move forward, and thereby I have the freedom to work with an emulator. And once I am logged in, I fetch notifications, which is that icon you see at the top.

 

And then I fetch the user-specific config, based on which the home screen configures itself and the various micro frontends load up according to the profile of the user. Now, all of these canned responses, which we call expectations or examples, the beauty of it is that they are being verified against the OpenAPI specification of the backend. To reiterate, this gives you a lot of safety and confidence that you can thoroughly depend on the service mock, because this service mock is truly emulating your backend, and your backend is also held accountable by the fact that this API spec is run as a contract test on that side. So that’s how we gain the confidence there. Now, moving forward, let me quickly show you a demo. I have recorded the demo for convenience and in the interest of time. So here you would see at the top, not sure how legible it is.

 

We have two scenarios: invalid login, and then a recharge plan. Most of these tests are written in a framework called Teswiz. Anand spoke about it in the first talk today, so I’ll skip the details here. The Teswiz framework is integrated with Specmatic, and Specmatic is serving the stub responses. Let me kick off this test. So this is launching the MyJio app, and once it launches, it will go into the OTP flow. First it will verify an incorrect number.

 

Obviously, if the number is incorrect, that is being validated both at the front end and at the mock level, just like the real server would behave. And in this case this is the OTP flow I mentioned, so you can verify that it actually behaves in that manner. Once I enter the OTP which is pre-cooked into the mock server, the mock server responds with the success response and the home screen loads. Now it’s fetching the notifications and the configuration for this user in order to display the sections specific to them. And then we can verify whether this user is a prepaid user or a postpaid user, and whether the app behaves accordingly. Now we’re going through a recharge flow.

 

Now, once the user is logged in, we can verify that a recharge flow is also possible. I’ll skip through some of these and take you to that section a little later. All of these details we were pushing to ReportPortal, and then we could see screen by screen what’s happening, and all the sophistication we could achieve given that we are isolated from the backend and the test could be independently built and completely verified on multiple devices too. So that’s a quick walkthrough of a serial test where I’m going one by one. The beauty of this is that the mock servers are stateless, which means you can hit them with any number of requests you want, which is exactly what we did: we started doing parallel testing. Now that we are not dependent on a real server, issues such as whether the data is set up correctly, whether it will load, and whether there will be concurrency issues in terms of data being updated when multiple tests run, are no longer a risk.

 

Here we have a fully isolated mock environment, which means I can launch two emulators, or as many devices as I want, and parallelize the tests, so I can go much faster with my workflows. And of course I can test multiple workflows in parallel. So here we have the OTP login flow as well as a recharge flow going in parallel, and that means I have the flexibility to do this without having to worry about whether my backend systems are going to be able to handle this kind of setup. Okay, so with that, let me pop back into my presentation. Now, this setup also gives me the flexibility to simulate certain faults. For example, I could say: if the OTP is wrong, what is the behavior that I need to look for on the screen? The second thing I can also simulate is delays, network traffic delays.
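As a sketch of what such a fault expectation might look like (the path, field names and error payload here are assumptions, and the spec would need a 400 response declared for the stub to accept it), an invalid-OTP scenario could be set up as:

```json
{
  "http-request": {
    "method": "POST",
    "path": "/otp/verify",
    "body": { "phoneNumber": "9999999999", "otp": "000000" }
  },
  "http-response": {
    "status": 400,
    "body": { "errorCode": "INVALID_OTP", "message": "The OTP entered is incorrect" }
  }
}
```

Delays can be simulated in a similar spirit by attaching a delay to an expectation; the exact mechanism depends on the Specmatic version, so check the documentation rather than taking this sketch as the definitive format.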

 

And delays: what if such a delay happens? Does the mobile app crash? Does it go into an infinite spinner? Or does it show an appropriate message saying, this is taking time, can you try again? Such things would be very hard for us to emulate with a real service in the backend. But when we have our own canned responses, we can simulate with Specmatic that there is a delay for this particular request and this particular phone number, and I can verify the behavior when there is a delay, and whether the error messages are being handled correctly. Moving forward: how did this all fit together in the overall test architecture? I showed you everything running locally and whatnot, but this is part of a larger, sophisticated test suite. The one tenet that we wanted to follow is that any test suite we build has to work on our local machine.

 

It should work in CI, and in this ephemeral environment for application testing where we want to spin up a few more internal services and test against them. Now, for this to become uniform, we use Teswiz so that we could test across Android, Apple and Web, and also simulate multi-user scenarios. That’s the test setup running locally or in CI. The device farm itself was BrowserStack, so that’s where all the devices we were using ran. Teswiz would push the app onto the device running on the device farm, and the Specmatic stub server would start up in the local environment or on the CI server, wherever the suite is running. And the expectations would be set; by expectations, I mean the canned responses that I already showed you, set up in preparation for the Teswiz test that would run against the application.
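A rough sketch of that expectation-setting step, assuming the stub is listening on localhost:9000 and exposes its over-the-wire expectation endpoint at /_specmatic/expectations (verify the path and port against your Specmatic setup; the request path and fields are again illustrative):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch: before driving the Appium flow, the test pushes a canned
// response to the running Specmatic stub. The endpoint path, port and payload
// shape are assumptions to be confirmed against your Specmatic version's docs.
public class OtpExpectation {
    public static void main(String[] args) throws Exception {
        String expectation = """
            {
              "http-request": {
                "method": "POST",
                "path": "/otp/verify",
                "body": { "phoneNumber": "9999999999", "otp": "123456" }
              },
              "http-response": {
                "status": 200,
                "body": { "loginStatus": "SUCCESS" }
              }
            }
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9000/_specmatic/expectations"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(expectation))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // A spec-aware stub rejects expectations that do not match the OpenAPI spec,
        // so a non-2xx status here is fast feedback that the canned response is wrong.
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```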

 

Now the application launches and the test starts clicking through the screens, going through the journeys, and this app has to talk to the Specmatic stub server. There’s a question here: how would this work? Because the device is in the BrowserStack cloud, and the Specmatic stub server is running either on my local machine or on the CI server. That’s where we used the tunneling approach: BrowserStack has the ability to open up a tunnel to the environment from which we are testing, which means all these requests could pass through from the BrowserStack device farm to Specmatic in the local environment. And finally, all of this was set up with Applitools Eyes. I think Anand may have discussed it, or I’ll leave it to you to look it up; it’s for visual AI testing.
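Roughly, that tunnelling step looks like the sketch below, assuming the standard BrowserStack Local binary and the usual local capability; flag and capability names should be checked against BrowserStack’s documentation for your setup.

```sh
# Start the tunnel from the machine that runs the Specmatic stub (local or CI).
./BrowserStackLocal --key "$BROWSERSTACK_ACCESS_KEY" --force-local &

# The Appium session is then marked as local (for example a "browserstack.local": "true"
# capability, or "local": true under bstack:options), so requests from the device in the
# BrowserStack cloud are routed back through the tunnel to the stub, e.g. http://localhost:9000.
```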

 

And all of these reports would go to ReportPortal. So with this setup, any of us, the developers or the SDETs, could run it on a local machine and verify things easily, and the very same setup would run uniformly on the CI servers. The beauty of this is that on the device farm, since we could scale the number of devices, we could also run a lot of these workflows in parallel. So that’s how we set it up. And this made way for the next set of tests. Sorry, one difficulty that we were facing at times here is that the API specifications may not be available for the backend we’re working with, either for that particular scenario or for an entire journey that we have to go over. So what we did is use the Specmatic proxy recording mode: we would put the proxy in between the mobile device and the backend in, let’s say, a staging environment or some sort of test environment, and let the app talk through Specmatic to the MyJio backend and back. The proxy would record all these interactions and then generate the OpenAPI specification in addition to the expectations, the example files that I already spoke about.

 

Because these are generated, obviously they are going to adhere to the OpenAPI spec. This gives us a starting point, instead of having to manually capture the API spec being exposed by the backend. It also gives us a more accurate representation, because given sufficient traffic passing through the proxy, we could accurately model the API specification after the real behavior, rather than it being just documentation. Now, once I have the API specification, I do not need the real backend anymore, which means I am in my isolated test setup: I start up my stub server, feed the API specification and the expectations to it, and the app just starts. So literally what I would do is start the mobile app, put the proxy in between, run through the login flow interactions, and once I’m done, I have a bunch of API specification and example files; I take them out and I don’t need to talk to the server anymore. In fact, the way we would test it is to completely get off the VPN, completely get off the network, and then we could isolate and test on a local machine with the stub server. That’s the beauty of it.
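The record-then-replay workflow described above boils down to something like the following sketch; the exact CLI flags and generated file names vary across Specmatic versions, so treat this as the shape of the workflow rather than the precise commands.

```sh
# 1. Record: point the app at the proxy, which forwards to the real staging backend
#    and captures every interaction that passes through it.
specmatic proxy --target https://staging.backend.example ./recordings

# 2. The proxy writes an OpenAPI specification plus expectation/example files
#    into ./recordings as you click through the journey.

# 3. Replay: drop off the VPN/network and serve those artifacts as an isolated stub.
specmatic stub ./recordings/generated_api.yaml --data ./recordings/examples --port 9000
```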

 

Which means these OpenAPI specifications and expectations could now be shared across the team, and each team member had the ability to independently start the app and test it. This proved to be a very big productivity boost for the front end, because the mobile app is no longer tied to something being available in a staging or higher environment that they have to depend on. Ultimately, it’s not just a story of local and CI. In the local and CI environments, we simply run the Teswiz test suite and verify across Android, Apple and Web, and the plethora of combinations, and here we could do that with just the MyJio backend stubbed out. But this setup also paved the way for the fact that once I’m done testing the mobile app in isolation, it gives me the opportunity to test my internal backend too. So this is the ephemeral environment for application testing. Here I run the same test suite. It’s reusable; it’s oblivious to what’s running there, and the MyJio app remains pretty much the same. And here I run my own backend.

 

And I let that run for real, because I want to verify whether my app integrates with my MyJio backend. This time around, the external dependencies are what I stub out with Specmatic. That way, the tests we wrote early on to verify workflows for the app in isolation now scale to this setup: the external dependencies are still going to have those kinds of difficulties, and I cannot depend on always testing against the real thing. So there we said we’ll test the front end against our own backend, but stub out the external dependencies. Again, the same technique, rinse and repeat: where API specifications are available, great, reuse them; where they’re not available, we could always record them with the proxy and move forward. So this is the approach that we used, and that was the entire journey in terms of how we achieved UI component workflow testing by stubbing out the backend based on API specifications. I’ll stop there, and I’m happy to take questions now.

 

Yeah, so we have a couple of questions. I think first we can start with this one from Anonymous: so only common functionality can be tested with these mock services, am I correct? For anything that is new, we have to wait for development and only then can we test?

 

For common services... I’m trying to look up the Q&A, one second. Anonymous: only common functionality can be tested with mock services, am I correct, for anything new? No, in fact it is the opposite. You can obviously isolate and test the common services already built, but for anything new, if the backend is not ready and you want to start building and testing the front end, that is where it’s even more useful, because the front end and the backend teams can agree on an API specification and capture their agreement in the OpenAPI spec itself: this is how the API is going to behave, these are the methods, these are the operations, these are the schemas of the responses. Once I have that in place, I can independently move forward. I’ll start setting up my own test data, and the test data is always going to be validated against the spec. So I can start building out my front end based on the API specification and move ahead.

 

And likewise, in parallel, the backend team can start using that API specification as acceptance criteria for building out their backend: if they get all the tests generated from the API specification passing, then they are also done. So we can both move independently and in parallel. I hope that answers your question.
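Roughly, that division of labour maps onto two Specmatic invocations of the agreed spec, sketched here with an assumed file name, host and ports; flag names can differ between versions, so verify against the Specmatic documentation.

```sh
# Consumer side (mobile app team): serve the agreed spec as a stub and build against it.
specmatic stub login_otp.yaml --port 9000

# Provider side (backend team): run the very same spec as a contract test against
# their service, so the spec doubles as executable acceptance criteria.
specmatic test login_otp.yaml --host localhost --port 8080
```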

 

Then we have another one, from Sourab: do we have this as open source, some GitHub link to visit the code for this setup?

 

Certainly. So Specmatic is available as an open source project. I will just paste that link in the chat. You can also look it up yourself. And yeah, I’m pasting the GitHub project link in the chat.

 

Right, then we have another one, from Ibtsam: for the device to talk to the Specmatic local server, did you just use BrowserStack tunneling, or did you have to make some changes in your app code as well?

 

Very good question. So in the apps, usually what we do is have multiple profiles, let’s say a dev profile, a test profile, and a production profile. In the dev and test profiles, the backend server URL is updated to point to Specmatic. So when I’m starting the app, I have to give it the profile name in whose context I’m starting, and it will accordingly talk to that particular backend URL. All I’m changing is which URL you speak to based on the profile, and that’s it. And obviously when we are shipping the code to production, most of this, depending on which stack you are on, will not even get shipped. You’re pretty much pushing a configuration, and the configuration says which server to talk to. So there is no code change, so to speak; it’s a config change.
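A minimal sketch of what such per-profile configuration might look like (the file layout, keys and URLs here are hypothetical; the real setup depends on the app’s stack): only the base URL changes per profile, with dev and test pointing at the local Specmatic stub.

```json
{
  "dev":  { "apiBaseUrl": "http://localhost:9000" },
  "test": { "apiBaseUrl": "http://localhost:9000" },
  "prod": { "apiBaseUrl": "https://api.production.example" }
}
```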

 

A follow-up question from him: so does this service mock mean that you mocked the entire backend, or were only partial requests mocked for your backend?

 

So like I said, there are two use cases. One is that the entire backend was mocked for the UI component workflow testing in the lower environments, like my local machine or CI. But when I’m testing this application in my ephemeral environment for application testing, where I have my own backend, my own backend is still the real service running, and any third party services are mocked with Specmatic. So ultimately the principles and concept of how you mock a service remain constant; the context in which you’re using it decides what you want to mock. If I want to isolate only the app and test it, then obviously I’ll completely mock out the entire backend. But if I’m in a different setup where I have confidence in a certain service and control over it, and I do want to include it as part of my system under test, then that would not be mocked and everything else would be.

 

So the question comes down to: how are you defining the boundary of your system under test?

 

How frequently do you run the proxy recording?

 

That’s a very good question. There are two answers to it, so let me start with the first. Most of the time, proxy recording as an approach was used to give a head start to a team that says: today I do not have an OpenAPI or Swagger spec, and putting one together or generating it from the code has certain difficulties and adds an additional burden. Instead, we start with the proxy. We record the current version of the API specification, and that gives the team a head start going forward. Whenever they need to make a change to the API, they will use that API specification as a medium of communication and collaboration; they can make changes to it and check it into the central contract repo.

 

We maintain all our API specifications in a central repo in git and work off of that. In this model, the proxy is acting as a utility to give you a head start in obtaining an API specification to begin with. So that’s for internal teams. But if you’re working with a third party where the team is probably never going to provide you with an API specification, or maybe it is not on their immediate roadmap to give you one, then we had jobs which would run the proxy nightly, and once the API specification was generated, we would run that spec as a test against their service on a periodic basis. The moment the test starts breaking, it means their APIs have evolved or changed. We could also let them know that they have probably broken backward compatibility with us, and at that point it’s also a signal to re-record the API spec.
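In spirit, that nightly check is just the recorded spec run as a test against the vendor endpoint, something like the sketch below; the spec name and URL are illustrative, and the flag for pointing the test at a base URL differs between Specmatic versions, so confirm it before use.

```sh
# Run the previously recorded spec against the third party on a schedule.
# A failure means their API has drifted and the spec (and stubs) need re-recording.
specmatic test third_party_payments.yaml --testBaseURL "https://sandbox.vendor.example" \
  || echo "Third-party API drifted: re-record with the Specmatic proxy"
```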

 

So those are the two models in which we have used them, not just in this team but across several teams. Good question, by the way.

 

So we have a question from Indra Neel. How are you covering Kafka, gRPC or similar flows?

 

Excellent. Again, as much as we like to cover request-response and synchronous flows, we also have event-driven architectures, which are a very big part of any large scale system. Any large scale system cannot do with purely request-response; it has to deal with async. The Specmatic stub and service virtualization server that we use also supports gRPC and GraphQL. So in many cases, not in this particular app but in other apps where there are GraphQL backends aggregating across multiple services, we were able to do the same thing with GraphQL, and likewise with gRPC. And of course with Kafka: the AsyncAPI specification is something that Specmatic supports as well.

 

The AsyncAPI specification, under its wings, helps us document Kafka, Google Pub/Sub or JMS, for that matter. All of those are also covered, and we have stubbed those as well.

 

We have a question from Tang: how can we utilize the mock/stub server with the TDD development mode?

 

The mock/stub server with the TDD development mode. That’s an interesting question. Usually the way I look at it is: I want to start building my front end. If I were to do a TDD flow in a regular class setup, there are two or three aspects I would look for. One is that I write my test first, and then I say I want to test this particular class, and that class will probably have a dependency which does not exist yet. So that dependency I would have to mock out ahead of time, using some mocking framework like Mockito or the like, based on the language that you’re using, and on that mock I would set an expectation.

 

Like: when you receive this method call from my system under test, give back this response; and thereby I isolate my system and verify and test it. So that is a typical TDD flow. Now, extrapolating this TDD flow from a single class to an entire mobile application, it looks conceptually similar: I am writing an acceptance test at the very outermost level, maybe with Teswiz or something like that, where I write an acceptance test for the mobile app to say, hey, if I click on this button, I want this particular list of messages to open up. For this to happen, the dependency backend service needs to return a list of messages, but that service doesn’t exist yet. So what do I do? I use a Specmatic mock, which is a wire-compatible service virtualization, and from my test I set an expectation that when this request comes in, you give me back this response. So technically speaking, even without the backend, I can test-drive the development of my front end.

 

So this is one part of the answer. Now, in doing so, I have an API specification for the backend, and that API specification itself becomes a contract test; that’s the beauty of it. I can hand this API specification to the backend team, and they also have a test-driven development workflow by virtue of running the same spec with Specmatic as a contract test on the backend. They get tests for free to leverage, which gives them the ability to do a TDD workflow on their side as well. It’s a very interesting question; I hope I answered it to the best of my ability.
