Unshackling Your System Under Test: Shift-Left Testing Through Dependency Isolation

Presenters: Hari Krishnan and Joel Rosario
Event: Selenium Conf 2024
Location: Online

Presentation summary

Unlock the full potential of shift-left testing by liberating your System Under Test (SUT) from real-world dependencies in your component tests using Selenium or Appium. This presentation delves into techniques for stubbing and mocking HTTP services, Kafka streams, and database interactions, enabling developers, SDETs and QAs to run robust and fast tests directly within their local and CI environments. In this demo of service virtualisation and fault injection, attendees will learn how to create high-fidelity simulations that facilitate early defect detection and streamline the development process.

Transcript

Welcome to the session by Joel and Hari on unshackling your system under test. It's a 45-minute session, and we'll keep the last five to ten minutes for Q&A. Over to Joel and Hari.

Hari Krishnan:
Thanks a lot Sahil, for introducing us. So let's get started. All right, so welcome to this presentation about unshackling your system under test in order to shift your testing left through dependency isolation. My name is Hari. I am a transformation advisor, coach and trainer, and I'm an API governance strategist. I take a keen interest in a lot of conferences; I volunteer for them and speak at a few of them. These are some of the conferences I speak at. That's quickly about me.

Hari Krishnan:
I’ll hand it over to Joel.

Joel Rosario:
Hi, my name is Joel Rosario. I’ve had around 20 years of experience in the industry and I currently coach teams around engineering excellence. I’m one of the authors and contributors to an open source tool called Specmatic, which helps you do integration testing without needing any integration environment. Hand it back to Hari.

Hari Krishnan:
Thanks Joel. So let’s quickly jump into the talk so that we can cover all the ground today. Before we get into the thick of things, let me put a lay of the land in terms of the application architecture for a system that we’d like to test. So I have a mobile application which talks to a service which in turn pulls data from a database. And it also has to drop a message to a Kafka server through which an analytic service gets information about what’s being queried and then the response comes back to the application. So this is typically all the components that are there in our application that I would like to test. I want to put a selenium test together for this. What are the difficulties that you think that you would face in terms of testing the system? I have tested this and a similar system, and there are future challenges that immediately that came back to me.

Hari Krishnan:
I’d like to understand, like what are the challenges that you foresee in testing such a system? Can you please drop those answers in the chat? Any difficulties that you foresee that might happen in such a system? Several moving parts. Yep, that’s a very good point. Connecting to Kafka if it’s in a private subnet, difficulty in integration tests, real time events, third party services being down. Excellent point. Yeah, it’s integration hell all the way. And I’m glad you brought up all these points because, yes, data integrity testing, Kafka, synchronization problems, the JDBC connection issue I guess it feels like I just worked with all of you on this application, so thanks for calling that out. We had very similar issues also. Primarily the issues that we faced were complex test data management.

Hari Krishnan:
I had to prime the data ahead of time, and then there was the database size and all of those difficulties. The jigsaw puzzle of putting all these pieces together, getting them deployed and then writing a test against it is not easy, and the repeatability of the test itself is compromised because of the complexity. Of course there are many more issues; like you rightly called out, it's not an easy puzzle to solve. So what did we do about this? Well, let's understand this system in a step-by-step manner. Right now I'm trying to focus on this mobile app which I want to test, so I'm going to say that's my system under test. And if I take that system under focus, the immediate dependency for me is the service itself, and I'd like to isolate it from that dependency so that I don't have to deal with these difficulties. I want to be able to test it in a controlled and repeatable environment.

Hari Krishnan:
So I don’t want to be troubled with all the difficulties. So what do I do? I could roll up a mock server and immediately I am isolated from my dependency. I don’t have to have the difficulty of having the Kafka or the network connectivity issues or on deployment environment. I don’t have to worry about any of those. This is great. But then this is the problem that I hit. This service evolved and then the developers had a v two version of it. I was not informed or probably I missed the update that I should have got.

Hari Krishnan:
So thereby I was running off of a setup which was not representative of the actual service itself; my service mock was representative of some understanding I had of how this service behaved in the past, and the service has already moved beneath my feet. Now I was in a difficult situation, right? I'm running a wrong test setup. How can we go about solving this? Any thoughts? That's nice. Thanks for giving a nice segue into some of the topics that we'd like to talk about. For one, that's good: calling out contract testing.

Hari Krishnan:
That’s beautiful. Stubbing the data. Yes, we would like to stub the data, but we also want to verify that the stub data is representative of the real server. That’s the kind of difficulty we are facing. That’s good. All good points. So let’s get into it, right. What we did is we needed a way for the service mark to repeat, only representative of the actual service itself, that it’s, you know, helping us isolate.

Hari Krishnan:
So what we then did is took the API specification of the service we are isolating, and ran that itself as a service virtualization server. Thereby this is representative of what's actually happening, right? And this is also a lot less effort, because when I had to hand-roll the mock, I had to write some code to get that mock running, or I had to put the stub data together myself and guess my way through all of those pieces. Instead, when I have an API specification, if I choose my tooling right, I could stand up a service with practically no code. That's the beauty of it. And like Pawan was calling out, I could then make sure that my counterpart, the service provider, is also keeping their side of the promise, which means this same API specification is run as a contract test on their side. So I don't have the difficulty I had earlier: if the service evolves, the specification changes, I would know immediately, and my service mock evolves along with what is happening. That's great. Now, with this setup, there are further advantages: apart from the fact that you have these two systems in lockstep, each side still has the independence to move forward on its own.

Hari Krishnan:
Any service mock that you build is based on a concept called canned responses, right? Like our stub data, like Neha was calling out. Basically, it is: if this request, then return this response. You're setting it up ahead of time: these are the expected requests, these are the responses you want to send back. The beauty of a service mock that is based off of an API specification is you can now have the stub data validated against the spec itself, to check whether it is actually adhering to that specification. Is my stub data even correct? I have made this mistake in the past, right? I just assumed a certain field is a string or an integer and moved forward with it, and maybe I would not even know that this particular stub data is not in line with the spec. But now, when you have a service mock that is based off of the API specification, the validation is going to happen (is this stub data according to the schema described in the specification?) and only then is it accepted into the service mock. Otherwise it's rejected right away. So that's the beauty of it: having your tooling such that your stub data is never stale is super critical. That way it's always dependable, and immediately when the real service evolves, your service mock evolves along with it and gives you immediate feedback.
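
As a sketch of what such a canned response looks like on disk: Specmatic-style stub files pair an expected request with its response in JSON, roughly as below. The endpoint and fields are hypothetical, and the exact file schema should be checked against the tool's documentation.

```json
{
  "http-request": {
    "method": "GET",
    "path": "/products/10"
  },
  "http-response": {
    "status": 200,
    "body": { "id": 10, "name": "widget", "price": 9.99 }
  }
}
```

Because the stub server is backed by the OpenAPI specification, a file like this is validated at load time; if the spec says price is an integer, this stub is rejected immediately instead of silently going stale.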

Hari Krishnan:
So that’s an important piece that we had to tackle. Hope that is clear. Okay, now what more can you do? There’s a lot of interesting stuff that you can do with service marking, right? Let’s say I want to, I have written some code in the app to handle some resiliency situations. For example, my service could be down, it could be slow to respond, or maybe for some reason I’m getting empty responses. For all this. I have written some sort of handling, error handling in the app and I would like to test it through selenium to see whether, when I hit certain scenarios, whether the service is down and still the app should be able to respond in a meaningful manner without crashing. What is an option to test this? Any ideas? One option I try, which is a very naive option, is to just simply take the service down. Sure, that’s great.

Hari Krishnan:
I could take the service down, run the test and make sure that the test still passes, because that particular scenario has to make sure that even without a certain dependency being available, the app is not fully crashing; it has a certain degraded sort of behaviour, right. So yeah, response timeouts, all of those things we've got to handle. Now how do I test this further? Comparing the response and handling the delay, sure, that's logic I had to put in the app. But how do I simulate the delay in the service itself? I cannot go and change the code in the real service to say, for this particular request, respond after 5 seconds; for this particular request, respond after 10 seconds. That would not be realistic. Network simulation. Beautiful.

Hari Krishnan:
I’m sure most of you would have seen chars proxy, but that’s exactly what we did. Also, we put in a network simulator in between and then the bandwidth throttling so that you can then say, hey, this service is slow to respond. Thereby I could verify if my system is still able to handle that kind of errors and if my app is still resilient for those kind of faults. All this is great. The one difficulty that we did face with these approaches is the fact that they are not programmatically easy to set up within a selenium test. I can do all this a little bit manually, but if I want to put up an automation suite and have this repeatable all the time, and very quickly that was getting very difficult to handle. That’s when we realized the mocks are a lot more, you know, easier to set up and run with. Instead of having to have a network proxy and with charge proxy, I still have to have the real service available, right? And then it’s the my dependency is still not isolated.

Hari Krishnan:
I still want to have all the goodness of my dependency isolation and still have the ability to do fault injection. What do I mean by that? I could have an empty state: when I make a certain request, I could get back an empty array or an empty response. Maybe I'm doing a product search and it comes back empty. The e-commerce app cannot simply crash, right? It has to say, hey, no products available with that search criteria, come back later. It has to be a meaningful response. So I could handle the empty state. I could also simulate an error state. What if, I mean, God forbid, but you get a 500 from the service? What do you do with that? I could simulate that too, because the service mock is practically canned responses; I can simply say, for this particular request combination, return a 500.
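
Continuing the hypothetical stub format sketched earlier, empty state and error state are just more canned responses (shown together here for brevity; in practice each expectation typically lives in its own file):

```json
[
  {
    "http-request": { "method": "GET", "path": "/products", "query": { "name": "unobtainium" } },
    "http-response": { "status": 200, "body": [] }
  },
  {
    "http-request": { "method": "GET", "path": "/products/503" },
    "http-response": { "status": 500, "body": { "message": "Internal server error" } }
  }
]
```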

Hari Krishnan:
That’s great. What’s more, I could even do delay simulation. The certain request return after a certain period of time, which is greater than the timeout I have set in the app itself. Thereby I can trigger that particular functionality in the app through my selenium test to make sure that when there is a delay, this app is able to handle it and gracefully degraded to give a meaningful experience to the user. And through all this, even these error scenarios and all, we want to make sure that those error responses are also schema valid. Correct. That’s where I always have this OpenAPI specification over there, which I’m going to be like, which is what is verifying that my stub responses, even the error responses, are in line with the specification that the service mark is based off of. That’s the beauty of sticking to a specification, right? An industry standard spec like OpenAPI, which is accepted across for all HTTP rest interactions, pretty much is a very good way for you to standardize.

Hari Krishnan:
And so, yeah, stand up your mock server. Perfect. So moving forward: sure, we've been testing the app left, right and center, and we've seen how we could use API specifications to isolate this app from the remainder of the system. But we can't live in a dream world, right? I need to test the rest of the system also. Let's shift focus to the service itself. How do I test this service? Now, this service is interesting, because if I put it under the lens, this is my system under test, and I need to write an API test for it. If I'm writing an API test for it, what are my dependencies? There's a DB, and there's a Kafka dependency. And like some of you already highlighted, there's the subnet issue with Kafka, and there's standing up a local database server.

Hari Krishnan:
All of that is a lot of difficulty that we went through. Let's take those one by one and focus on the DB. What's the first thing we could do? I could use an in-memory database, which is fairly easy to think of; it's the first solution most of us come up with. But what are the difficulties you have faced, if I may put it to the audience, in switching a real database with an in-memory database like HSQLDB? Any experiences that you would like to share? It's very easy to say, right? Just switch it with HSQLDB and we're all in dreamland. It's not as easy as it sounds. Empty responses due to in-memory DB compatibility issues. Yeah, thanks a lot Anand for bringing that up. Beautiful.

Hari Krishnan:
So most of the in-memory databases may not support the dialect of SQL that you're using. For example, if you are tied to a certain vendor, and by chance you are using a dialect that is very specific to that vendor, the in-memory databases may only be able to support ANSI SQL. That means you cannot practically point your queries, which are built to talk to that vendor-specific database, at an in-memory DB. That's the first immediate practical issue that you face. So how did we solve it? We have tried using Testcontainers to an extent, right? Testcontainers are beautiful. I could spin up MySQL or Oracle or any other database you can think of inside Docker, and programmatically too; that's the beauty of Testcontainers. I love the approach, but there were certain constraints we were facing in this particular project, and I think it's a problem I've seen across several other applications also, which is the DB size. For some of these databases, the dumps that we used to get, even from a staging environment, are quite large. Just to load that up into a test container and have it running is, first of all, slow, and second, sometimes it's not even practical to have that kind of file size sitting around.

Hari Krishnan:
And how often do you take that dump? Maybe I could take one once every day, once every week. But nevertheless, you're still waiting for that problem to happen where the DB dump you're depending on for your test setup goes out of sync with the real database, and thereby it's again the same problem, right? My mock is out of sync with the real setup. That's not a great place to be. Then what we were thinking is, how do you simplify this? I did not want a very heavyweight test container with a 2 GB database running inside of it; it's all very complicated. Is there a better way to approach this? That's when, because we were dealing with a Spring application, we were able to see that there is a data source which talks to the JDBC driver, which then talks to the database.

Hari Krishnan:
So this is the layering in the application, right? There is a protocol level, which is JDBC, and then there's a Spring data source sitting on top of that JDBC protocol. So what we thought is we could potentially just switch out the data source itself for a mock data source and have it talk to a JDBC mock, in which case what we're doing is standing up a wire-compatible JDBC mock. Right? A JDBC mock here is very different from an in-memory database, because it doesn't care about your dialects, correct? Your compatibility problem is completely gone. All you are saying is: given a particular query, what should my result set be? That way, you are completely unshackled from the particular vendor or anything else you're talking to.
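
In Spring terms, the swap can be as small as a profile-scoped bean, sketched below. The bean wiring is standard Spring; StubJdbcDataSource and its fluent API are hypothetical stand-ins for whatever wire-compatible JDBC stub you use.

```java
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
public class ComponentTestConfig {

    // Active only under the "component-test" profile; production wiring is untouched.
    @Bean
    @Profile("component-test")
    public DataSource stubDataSource() {
        // Hypothetical stub, wire-compatible at the JDBC layer: it never parses
        // vendor dialects, it just answers "given this query, return these rows".
        return new StubJdbcDataSource()
                .on("SELECT id, name FROM products WHERE id = ?", 10)
                .thenReturnRows(new Object[][] { { 10, "widget" } });
    }
}
```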

Hari Krishnan:
You can simulate anything that you need. Plus you also have the advantage of being able to record and replay certain JDBC interactions, thereby making your life easy. You don't have to think about the entire schema being stood up. If you are only testing a handful of scenarios in a certain use case, you could just stub out those queries, rather than having to worry about the entire universe of that schema and the ER diagram for it. That's about how we solved the database mocks. All right, so now let's come to the next topic: Kafka. Asynchronous systems are a lot more complex than the systems we've seen so far. At least we were in request-response land.

Hari Krishnan:
Everything is very straightforward, you know, syn-ack, everything is great. But now we are in async dominion, which is not easy. So we have this Kafka topic. The system under test is sending a message over there, and that is reaching the analytics service. Looks fairly straightforward, right? Practically two systems communicating happily over a pipe and sending messages to each other. What could possibly go wrong? I could send the right message on the wrong topic altogether. I have done this many times as a dev myself. I could send the wrong message on the right topic.

Hari Krishnan:
Still no good. I could send the right messages, but out of order, and I still have a lot of problems. So there is this plethora of issues plaguing our asynchronous systems, right? And these are the hardest of hard problems to solve. Any ideas or approaches that people in this group have used? It would be great if you could share: how do you test your asynchronous systems and validate this? Yes, that's possible. The message might not even reach the destination because we're bombarding the application. It's possible the queue could be backed up, and we don't know what the back-pressure settings are.

Hari Krishnan:
We don’t know any of those. What else could go wrong? Even very simple stuff. How is my message even correct? The consumer sure, good idea. But how do I test the interaction? Right? Like am I sending the right message? That’s what I want to work. Schema validation. Yes Jason, any other schema messages should be right. That is correct. Manually check it.

Hari Krishnan:
Anand, yes, thanks again for calling it out. But I wish it were that easy, man. It's not. And Tito, schema validation is super important. But again, how do we do it, and on what basis? With that bread and butter that Anand is calling out, let's move forward and look at what we did. Yeah, Alankar, looking directly inside the topic. That's an interesting idea. What are we publishing, and who's publishing it? That's what we want to look at, right? So let's take the system and isolate it.

Hari Krishnan:
The typical diagram that you've been seeing so far, you must be familiar with it by now. I will mock out the Kafka server itself. And like what some of you said, I could let the application drop a message into the mock Kafka server, then pull it off Kafka and verify whether it is according to the schema. Possible, correct. And the mock server itself is just a Kafka server running; it will receive any message that this application drops. But then, same problem: the receiving service would evolve, and it might expect a different schema of message to arrive. Isn't this familiar? We saw this already somewhere, right? It's a deja vu moment: the same thing we saw with HTTP.
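
The naive pull-and-check approach mentioned in the chat can be sketched like this; the topic name and required fields are hypothetical, and the hand-written checks are exactly the part that goes stale.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AnalyticsMessageCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // the mocked broker
        props.put("group.id", "component-test");
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        ObjectMapper mapper = new ObjectMapper();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("product-queries")); // hypothetical topic name
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(record -> {
                try {
                    JsonNode msg = mapper.readTree(record.value());
                    // Hand-rolled assertions like these drift as the consumer's
                    // expected schema evolves; hence the AsyncAPI approach below.
                    if (!msg.hasNonNull("query") || !msg.hasNonNull("timestamp")) {
                        throw new AssertionError("message missing required fields: " + msg);
                    }
                } catch (java.io.IOException e) {
                    throw new AssertionError("message is not valid JSON", e);
                }
            });
        }
    }
}
```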

Hari Krishnan:
You had a mock server, and that mock server was based off of an understanding that we had of the system. But then the service evolved, so the carpet was pulled out from under our legs. Suddenly the setup we are depending on for the test is not true anymore, right? So what do we do? Again, we need a specification in order to keep these systems in lockstep, correct? That's exactly why here we started using the AsyncAPI specification. Now, AsyncAPI is a very, very interesting area of work right now. Just like OpenAPI is widely accepted and helps standardize REST HTTP interactions, AsyncAPI is trying to bring under its wings all the async interactions, such as Kafka, Google Pub/Sub, and for that matter even JMS and MQTT; all of these could be under one specification standard.
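
For a sense of what that looks like, here is a minimal, hypothetical AsyncAPI 2.x document for the analytics topic; the channel name and payload fields are illustrative.

```yaml
asyncapi: "2.6.0"
info:
  title: Analytics Events
  version: "1.0.0"
channels:
  product-queries:          # the Kafka topic (hypothetical name)
    publish:
      message:
        payload:
          type: object
          required: [query, timestamp]
          properties:
            query:
              type: string
            timestamp:
              type: string
              format: date-time
```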

Hari Krishnan:
And if we are able to base our mock server off of the AsyncAPI specification, again, we get all the benefits: being able to validate messages against the spec, and being able to verify the right channel, or topic, to which the messages are being sent. That's the beauty of having a service virtualization based off of the AsyncAPI spec. And then, like Pawan called out earlier, you also have to keep the other party in line with your mock, which requires that the other party runs this AsyncAPI specification as a contract test on their side. So, net-net, what I'm trying to call out here as a theme that you might have seen repeatedly appear: we want to stand up mock servers based on API specifications which are widely accepted. That way we have both parties working off of a single source of truth, right? And we want every mock to work at a wire-compatible level, a protocol level like JDBC or any other protocol for that matter, because that way you are at an isolation point where the systems need not be touched, right? They can remain as they are, and we can precisely find the integration points and isolate them. And most importantly, we don't want to write code to stand up a mock server, because the moment we hand-roll a mock server, the maintenance overhead of it also falls upon us. Instead, if we base our mock servers off of the API specifications, then as the API evolves and the specifications change, the mock servers evolve with them, bringing all the advantages of schema validation and the other pieces along with them, standing on top of these important specs and protocols that most of us are familiar with.

Hari Krishnan:
I believe this is the way forward; we have been able to try this approach at scale, and it works. I'll hand it over to Joel to quickly go over a demo of the same. Over to you, Joel.

Joel Rosario:
Thanks, Hari. Okay, so as Hari said, I am going to show you how we ran tests in isolation against a real-life application. Before I do that, let me quickly go over an architecture diagram so that you understand what exactly we are testing. This is a product quote application. What typically happens is a sales executive is going to put in a bunch of customer details into the application, put in certain parameters, coupons, discounts, whatever it is, click a button, generate a quote, download the quote and send it, something like that. And QAs and devs, of course, will be testing this application. The way that typically works is whoever's using it will log into a portal; there's authentication, putting in a username and password, which redirects you to the product quote application.

Joel Rosario:
And now you use the application, and the application in turn has certain backend services and certain database tables that it queries; this is an Oracle database we are talking about. Let me just switch the screen share again. With that as an introduction, I am going to take you to the tests that we wrote for this application. As you can see from the tag, these are just the P1 tests; there is more where that came from. This is basically using a pretty cool test framework called TestWise, which under the hood is using Selenium. And yeah, the tests are running.

Joel Rosario:
You can see it running on my screen, just give it a second. And there we go. I don't know what's visible over the screen share, but there's a lot of whiz-bang flashy movement on the screen on my laptop. And that's because everything has been stubbed out; the application is completely isolated. There's not a single query to an API, not a single query to a database, going out of the sandbox, so to speak, of my laptop. Everything's running completely in isolation. And here we go. Took hardly a minute.

Joel Rosario:
The tests have run. Let's quickly check. We have run a few tests, right? 14 of these have passed. Everything's green, which is good. The beauty is this all ran locally. Everything ran locally; not a single call went outside my laptop. But nothing ever is fully baked, right?

Joel Rosario:
Obviously this is not really where we started. Just give me a second. Yeah, this test ran locally, but where we started was an integration environment. And the very first thing that we did was basically swap out the user with a Selenium test; it was Selenium under the hood, and the team wrote some tests that ran on our machines. We did a demo, and you are all very experienced, seasoned QA engineers; I think you would have guessed what might have gone wrong. Could I have some suggestions? What do you think went on in this demo? How did it end? Did it end well, did it end badly? What might have gone wrong if it didn't go well, et cetera. Essentially, the demo didn't go well.

Joel Rosario:
It crashed. A whole bunch of things crashed. Turns out that, to start off, the database had a bunch of entries left over from testing that had gone on that morning. The authentication server had gone down as well. A bunch of different issues had happened, and it didn't work. And it took us some time to even figure that out, because this was an integration environment; it wasn't easy to do that. Hari's been talking about isolating dependencies just a few minutes back, and that's really the theme of this talk. The answer was to isolate all of the dependencies that you see on screen.

Joel Rosario:
That doesn’t mean we don’t, you know, the application doesn’t need dependent, we just need to isolate them from the real ones. We are of course testing the system under test, which is a product code application in isolation, so that the other issues don’t come up and the test only fails when the product code application is at fault. The very first thing that we did, of course, was get rid of auth. Of course you need auth, but this is not the place where you want to be testing against a real live auth server, because the most interesting authentication related tests are not the ones that necessarily succeed, they are the ones that don’t. You want to be sure that your application is resilient, is able to handle authentication failed various scenarios properly. So you need to be able to take over authentication so you can simulate all possible cases. Step number one was to take care of authentication. Selenium has this very cool utility for injecting JavaScript.

Joel Rosario:
And what basically happens is there's a redirect, as you would have seen in the architecture diagram. There's a portal you log into, and the authentication setup is that certain parameters are sent via redirect to the system under test, which is the product quote application. The application then validates those parameters. We don't need that authentication setup, so we take over the creation of these parameters and send them to the application, which then validates them. And this is a handy piece of JavaScript that essentially stubs out authentication completely and enables us to test a bunch of authentication use cases. I've just done a few, but there are more, and that took care of authentication for us. Any questions, or should I move on to the next topic? It's silent, so I'm going to move ahead. If there are any questions, you can let me know.
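
In Java Selenium terms, the injection amounts to something like the sketch below; the storage keys and redirect parameters are hypothetical stand-ins for whatever the portal would normally pass.

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class StubbedAuthLogin {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("http://localhost:8080/landing");

        // Instead of logging into the real portal, fabricate the parameters the
        // redirect would normally carry and hand them straight to the application.
        // "authToken" and "userRole" are hypothetical names.
        ((JavascriptExecutor) driver).executeScript(
                "sessionStorage.setItem('authToken', arguments[0]);"
                        + "sessionStorage.setItem('userRole', arguments[1]);"
                        + "window.location.href = '/quotes';",
                "stub-token",        // swap in an expired or malformed token to
                "SALES_EXECUTIVE");  // exercise the authentication-failure paths

        // ...the test now proceeds with authentication fully under our control
        driver.quit();
    }
}
```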

Joel Rosario:
The next dependency that you'd have seen on screen was the database. The database was an interesting one. Turns out that there was a very large in-memory cache that the application was populating on the fly whenever it started up. So there were thousands of queries going across from the application to the database, and the application comes up six or seven minutes later. This is not a very good thing. If you want to run the tests quickly and get feedback locally on your laptop, six or seven minutes is too long to be able to run the first test. So, some suggestions? Oh, I also see a question from Joe about what we were using.

Joel Rosario:
We'll talk about that a little bit further down. Any questions? Any suggestions on what we could have done in this scenario? How do we take care of the fact that an application is sending thousands of queries to populate a complicated cache? What could we have done? Thousands of queries going across to a database setup; the database is in an integration environment, so this could be running from any developer's laptop, hitting the integration environment and returning data. I'll wait a moment for answers again. The first thing we tried was isolating the database just by pre-warming the cache. But it turns out it's nothing like that easy to figure out what exactly is in the cache when there are thousands of queries. And the cache itself was humongous.

Joel Rosario:
It was like a complicated hash map with lists and objects containing other objects containing other hash maps, and so on and so forth. That alone was going to be an exercise that would probably run into two weeks. The next thing we thought of doing was maybe running the database locally. The problem with that is it was a really old version of Oracle. If you want to run a database locally for tests, you're going to want to spin up a clean, fresh instance of the DB every single time. How do you do that when you don't have a Docker image? There was no Docker image available for this version of the database, so it was really difficult to do that. Say we did that.

Joel Rosario:
Okay, the database file itself ran into GBs, which is a pretty humongous amount of data and would itself have resulted in a three-to-four-minute or longer start time, which again defeats the quick feedback cycle; the cure is almost as bad as the original disease. What then? Do we use HSQLDB and get out of using Oracle completely? We couldn't do that; there were non-ANSI queries, so we definitely had to use Oracle. What then do you do? Clean up the application? You can't do that, because there are no tests. You can't refactor the application to write unit tests and isolate the DB that way, because there are no tests.

Joel Rosario:
So we ended up in a sort of chicken-and-egg situation: you can't write tests because you can't isolate the database, and you can't isolate the database because you can't write tests, so on and so forth. And it turns out the answer is pretty simple: just mock out the JDBC layer. You don't have to touch the application at all. There's nothing to be done. I'm showing you a small snippet; there's no code to be written here at all.

Joel Rosario:
This is all done through configuration. There is this tool called Specmatic which has a JDBC recording proxy. We pretty much put that in the middle, ran the application, ran the tests, and everything got recorded to this directory. Then we took the proxy out of the way once that was done and replaced the database drivers locally with these stub data source factories here, which pull data from the same directory to which everything was recorded. And everything just works. The application doesn't even know that it's talking to something that isn't Oracle. It just works off the bat. Just to give you a sense of what it looks like: some expectation fires, like a query with some data, and this just runs off the bat, completely isolated. Oracle is not a problem anymore.
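
To give a sense of the idea, each recorded JDBC interaction reduces to a query-and-result-set pair, along these lines; this is an illustrative shape, not Specmatic's exact file format.

```json
{
  "query": "SELECT id, name, price FROM products WHERE category = ?",
  "parameters": ["hardware"],
  "rows": [
    { "id": 10, "name": "widget", "price": 9.99 },
    { "id": 11, "name": "sprocket", "price": 4.5 }
  ]
}
```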

Joel Rosario:
The startup burden of the application simply isn't a problem anymore. So we solved that problem. And then comes the third problem, another set of dependencies you have seen, which is basically the HTTP dependencies. Here's an example of one of the HTTP OpenAPI specifications. The good thing is that for HTTP dependencies there are very popular and very well-known specification formats. I'm sure most of you know about these: for REST there's OpenAPI, and for SOAP there's WSDL. These are the two kinds of specification formats that the application used.

Joel Rosario:
This is OpenAPI. It's really detailed: your paths, your methods, your query parameters, data types, headers; you could even put limits. So, for example, if a particular value can't be less than, say, one, or greater than 1000, or something like that, you can put literally all kinds of data in it. Pretty cool. There's another one for SOAP called WSDL, the Web Services Description Language. And all that we did was basically drop these into this file called specmatic.json; I think I have the file open right here. We pretty much just declared them as stubs over here, nothing more.
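
As a flavour of the level of detail he's describing, here is a fragment of a hypothetical OpenAPI spec with paths, parameters, data types and limits:

```yaml
openapi: "3.0.3"
info:
  title: Quote Service
  version: "1.0.0"
paths:
  /quotes/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
            minimum: 1
            maximum: 1000     # limits like these are enforced by the stub
      responses:
        "200":
          description: A quote
          content:
            application/json:
              schema:
                type: object
                properties:
                  id: { type: integer }
                  amount: { type: number }
```

And the specmatic.json declaration amounts to listing the specs to stub, roughly as below; the field names follow our reading of the tool's docs and should be verified against the current version.

```json
{
  "sources": [
    {
      "provider": "git",
      "repository": "https://example.org/central-contract-repo.git",
      "stub": ["quote-service.yaml", "auth-service.wsdl"]
    }
  ]
}
```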

Joel Rosario:
And with no further code to be written, we get a fully capable, fully faithful stubbed application. So this was the specification that I just showed you. There is an actual instance in the integration environment, but locally on my laptop I don't need it; I have a stubbed version of that application that replaces it completely. I can set expectations on this, and it's perfectly faithful, because it's based on the spec. And because this is the same specification that the backend team is using, I can be sure that when I stub something out, my stubs are not going to go out of sync with the backend application. And by doing this, I've been able to stub out all three. Let me just quickly go back to my presentation to wrap this up.

Joel Rosario:
So we stubbed out auth, we stubbed out the database, and we stubbed out the application's HTTP dependencies, and this creates a cocoon. Everything is running locally. Selenium fires tests against the application, the dependencies of the application are stubbed out, everything's available locally, nothing goes off the laptop, everything's under control, and everything's really moving quickly. These are all basically protocol mocks, so we know we don't have to change any code to account for the fact that dependencies are being mocked out. And our mocks are using leading industry-standard specification formats which are also being used by the providers, so this ensures that the stubs are not going out of sync with the providers, which makes them perfectly safe to use. I think I can quickly now answer Joe's question.

Joel Rosario:
The DB mock that we used in this application was the JDBC mock module. Specmatic also has a Kafka mock module, which uses the AsyncAPI specification format that we talked about a little earlier. And over, back to you, Hari.

Hari Krishnan:
Thanks Joel. I guess that's pretty much all we had for today. We'd love to take your questions if we have the time, Sahil, or do we head on over to the hangout area?

Sahil:
We do have time, and I think Joe has another question about the app under test: is it run locally, or is it running in CI/CD?

Joel Rosario:
The application under test is running locally. Sorry if I didn't make that clear earlier. The application under test is running locally; in fact, everything that I showed today ran entirely locally on my laptop. This doesn't require connectivity to the Internet, doesn't require a VPN to be set up or anything like that, neither for the app nor for the dependencies. This can also be packaged, and in fact for this application we did package it to run in CI. But again, in CI it runs entirely locally on the CI server, so there is no talking to any integration Oracle database or any actual downstream dependencies. These component tests run completely in isolation wherever they are. So I hope that

Hari Krishnan:
answers those questions. While we wait: again, thanks a lot Sahil for facilitating our talk, and thanks a lot to SeleniumConf and the panel for selecting our talk. It's been a great experience; this is my second time presenting here, and it's always a joy. Again, appreciate all the support, and to all the audience, thanks again for your patience in hearing us out.
