Best way to start from an existing Keysight TestExec test system?

Hello,
Say a user has an existing test rack with standard instruments like power supplies, DMMs, arbs, and serial ports:
For each instrument there exists a .NET DLL with functions that execute VISA writes/reads to the instrument.
On the Windows 10 computer, Keysight TestExec is installed and executes test plans (*.tpa).
The test plans contain several test steps; each test step contains actions, and each action refers to one DLL function.

What would now be the way to convert this test rack into an OpenTAP rack?
Do I see it correctly that, as a first step, the existing DLLs must be rewritten as so-called plugins?
Or is there an existing "common" plugin with a test step that can open and execute C# DLL functions (with parameters for the DLL name and function name)?

Then, as a next step, the existing TestExec test plan has to be rewritten in the OpenTAP editor.

Currently not clear to me: what GUI does the final operator use when testing the DUTs in production? I assume they will not press Start in the TAP Editor.
So is there also the task of creating a GUI that calls the existing TAP test plan?

best regards
JG

Generally, we don't think a full rewrite is necessary, especially if your code is already in .NET. What is needed is a simple wrapper to expose the settings to OpenTAP. The OpenTAP APIs are very lightweight: public properties become settings, and the Run method wraps whatever your "execute" method is.
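As a rough illustration of that wrapper pattern, a minimal test step sketch might look like the following. This assumes the OpenTAP NuGet package is referenced; `LegacyPsuDll.SetVoltage` is a hypothetical placeholder for whatever function your existing instrument DLL exposes.

```csharp
using OpenTap;

[Display("Set PSU Voltage", Description: "Thin wrapper around an existing DLL call.")]
public class SetPsuVoltageStep : TestStep
{
    // Public properties automatically show up as editable settings in the Editor.
    [Display("Channel")]
    public int Channel { get; set; } = 1;

    [Display("Voltage (V)")]
    public double Voltage { get; set; } = 3.3;

    public override void Run()
    {
        // Delegate to the unchanged legacy DLL function (hypothetical name);
        // only this thin wrapper is new code.
        LegacyPsuDll.SetVoltage(Channel, Voltage);
        UpgradeVerdict(Verdict.Pass);
    }
}
```

The legacy VISA logic stays where it is; the wrapper only maps settings to arguments and reports a verdict.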
It is similar on the Instrument side: the Open method defines how to create a connection, Close is the shutdown, and any public method is accessible.
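On the instrument side, a comparable sketch (again assuming the OpenTAP package; `LegacyDmmDriver` and its methods stand in for your real DLL type):

```csharp
using OpenTap;

// Hypothetical wrapper around an existing VISA-based driver class.
[Display("Legacy DMM", Description: "Wraps the existing DMM DLL as an OpenTAP instrument.")]
public class LegacyDmmInstrument : Instrument
{
    [Display("VISA Address")]
    public string VisaAddress { get; set; } = "GPIB0::22::INSTR";

    private LegacyDmmDriver driver;  // the unchanged legacy code

    public override void Open()
    {
        base.Open();
        driver = new LegacyDmmDriver(VisaAddress);  // establish the VISA session
    }

    public override void Close()
    {
        driver.Disconnect();  // hypothetical shutdown call on the legacy driver
        base.Close();
    }

    // Any public method is callable from test steps.
    public double MeasureDcVolts() => driver.MeasureDcVolts();
}
```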
It would be possible to create a simple generator step that could wrap any C# function; however, that ultimately sacrifices performance, and because of the minimal amount of code needed, we've found most customers prefer the explicit wrapping approach (of course, you could also generate these wrappers).
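Such a generic step would essentially boil down to reflection. A minimal sketch of the reflective core (plain .NET, no OpenTAP types), assuming the DLL's functions are public static methods:

```csharp
using System;
using System.Reflection;

// Core of a hypothetical "generic wrapper" step: load an assembly and
// invoke a named public static method with the given arguments.
public static class DllFunctionInvoker
{
    public static object Invoke(string assemblyPath, string typeName,
                                string methodName, params object[] args)
    {
        Assembly asm = Assembly.LoadFrom(assemblyPath);
        Type type = asm.GetType(typeName)
            ?? throw new ArgumentException($"Type '{typeName}' not found in {assemblyPath}");
        // Select the overload matching the runtime argument types.
        Type[] argTypes = Array.ConvertAll(args, a => a.GetType());
        MethodInfo method = type.GetMethod(methodName, argTypes)
            ?? throw new ArgumentException($"No overload of '{methodName}' matches the arguments");
        // Reflection adds per-call overhead, which is part of why explicit
        // wrappers usually win on performance.
        return method.Invoke(null, args);
    }
}
```

A test step exposing the assembly path, type name, and method name as string settings could call this from Run, but every argument then loses type information in the Editor, which is the flexibility/performance trade-off mentioned above.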

Correct. Once you have the plugins, you will be able to create test plans. You can either create them manually, or create an importer/parser that reads in the previous file formats. You can do that using this interface:

You are correct. We don't envision the Editor being what is used in production. So far, we have found this "Operator UI" needs to be fairly custom to the specific usage, so rather than trying to provide a standard one, we have focused on simple APIs. There are some examples of the APIs and a simple UI that touches a lot of them here:
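At its simplest, what a custom operator UI does behind its Start button is load a saved plan and execute it. A minimal sketch, assuming the OpenTAP package is referenced and the plan file exists at the given path:

```csharp
using OpenTap;

public static class OperatorRunner
{
    // Load a saved test plan and run it synchronously; the returned
    // verdict (Pass/Fail/Error/...) is what the operator display shows.
    public static Verdict RunPlan(string planPath)
    {
        TestPlan plan = TestPlan.Load(planPath);
        TestPlanRun run = plan.Execute();  // blocks until the plan finishes
        return run.Verdict;
    }
}
```

A real operator UI would typically also attach result listeners and run the plan on a background thread so the button stays responsive, but the core API surface is this small.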

Of course, we also have the REST API if you are looking for more of a web/remote API.


OpenTAP users must have plenty of migration stories, at both high (architectures and tools) and low (APIs, data structures, common tasks) levels. It would be cool to collect best practices into a "living" migration guide.


@bweinberg Here's my migration story.

Prior to migrating to TAP, our test code was also in C#. We had several hundred thousand lines of code across 3 csproj libraries. The first issue I had to address was the tight coupling across all of our code: lots of static classes and variables, not much modularity. All the instrument classes, the settings classes, and the GUI were interdependent on each other. The solution was monolithic; it required a large hardware and file configuration in order to function at all, and would not scale up or down very far from that.

I spent several months refactoring, and eventually our 3 csproj libraries were separated into about 20 smaller modular libraries. For example, I've got one library for signal generators, one library for power sensors, etc. This was a valuable exercise regardless of TAP.

Next, I started developing the TAP side (predating OpenTAP). Generally speaking, for each of my 20 libraries, I developed an additional TAP library to wrap it, as @brennen_direnzo recommended. This doubled the number of libraries to about 40.

So today I've got about 20 OpenTAP libraries wrapping about 20 non-OpenTAP libraries. I try to keep most of the heavy test functionality/IP in the non-OpenTAP libraries, while the OpenTAP libraries mostly just implement the OpenTAP interface. Not the only way to do it, but it works for us.

One of several benefits of our OpenTAP migration has been the scalability. No longer a monolithic solution, we can support much bigger and also much smaller applications/configs than before. This has allowed OpenTAP to be useful in more scenarios than we first anticipated.


@david-wsd - great migration narrative! Some follow-on questions (if you have a moment):

  • what inspired / triggered your migration and how did you first encounter OpenTAP?
  • what challenges did you encounter during the migration and how did you overcome them?
  • your story above is replete with "lessons learned" - any others to call out?

Thanks!


@bweinberg Thank you!

In 2016 we concluded that our characterization test software had critical scalability limitations, and sought an overhaul. We tried, but ultimately didn't have the available software dev resources in-house. Then in May of 2017, the Keysight team (Audrey Kwan, @brennen_direnzo, @jeff.dralla) visited to pitch TAP. I hadn't considered an external solution until that moment, but it seemed like it could address our problems.

The main technical challenge was refactoring our spaghetti code (or more like an entire Olive Garden). But each incremental change was beneficial, which kept the ball rolling. The next and ongoing challenge has been gaining user adoption. Many users have jumped on board, but some are resistant. A key issue is that I haven't provided good training resources. The Keysight training course is good, but of course doesn't cover our internal plugins and usage model.

Sure. A big question of mine when I got started was: how much functionality should I pack into each test step? I've arrived at a couple of basic philosophies:

  • Test Steps should be designed to be as lightweight as practical.

For example, let's say you need to send a digital signal, then measure a DC current. Those should be two separate steps, not combined into one step. This results in greater flexibility.
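The two-small-steps version might be sketched like this (instrument types, method names, and result columns are hypothetical placeholders; assumes the OpenTAP package):

```csharp
using System.Collections.Generic;
using OpenTap;

// Two lightweight steps instead of one combined "send-and-measure" step,
// so each can be reused, reordered, and parameterized independently.

[Display("Send Digital Signal")]
public class SendDigitalSignalStep : TestStep
{
    public DigitalIoInstrument Io { get; set; }       // hypothetical instrument plugin
    [Display("Pattern")] public string Pattern { get; set; } = "1010";

    public override void Run() => Io.Send(Pattern);
}

[Display("Measure DC Current")]
public class MeasureDcCurrentStep : TestStep
{
    public DmmInstrument Dmm { get; set; }            // hypothetical instrument plugin
    [Display("Limit (A)")] public double LimitAmps { get; set; } = 0.5;

    public override void Run()
    {
        double amps = Dmm.MeasureDcCurrent();
        // Publish a one-row result table so listeners can record it.
        Results.Publish("DC Current", new List<string> { "Current (A)" }, amps);
        UpgradeVerdict(amps <= LimitAmps ? Verdict.Pass : Verdict.Fail);
    }
}
```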

My rule of thumb is this: make each test step as small as it can get, without it having to awkwardly pass data to another step. Sometimes steps will need to be rather large in order to satisfy that rule, but most often they can be pretty lightweight. Which brings me to my second point:

  • Test steps should not be secretly passing data to each other

I've often been tempted to have steps share data with each other behind the scenes, using static variables. But I believe this is outside the general paradigm of OpenTAP, as it results in less flexibility and strange dependencies between steps. With a little more thought, I typically find that a better solution exists.
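One common alternative to statics is to let a parent step own the shared data and have children fetch it explicitly, so the dependency is visible in the plan structure. A sketch assuming the OpenTAP package (step names and the serial-number detail are illustrative):

```csharp
using OpenTap;

[Display("DUT Session", Description: "Parent step holding data its children need.")]
public class DutSessionStep : TestStep
{
    public string DutSerialNumber { get; set; }

    public override void Run()
    {
        DutSerialNumber = "SN-0042";  // e.g. read from the DUT here
        RunChildSteps();              // children execute inside this scope
    }
}

[Display("Log Serial")]
public class LogSerialStep : TestStep
{
    public override void Run()
    {
        // Explicit, typed lookup of the enclosing parent; no hidden statics.
        var session = GetParent<DutSessionStep>();
        Log.Info("Testing DUT {0}", session.DutSerialNumber);
    }
}
```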


Wow! @david-wsd ditto.


Hi @bweinberg
I'm not going into the detail @david-wsd has, as I'm writing this on my mobile.
You beat me to the question; after @david-wsd's reply to my post, it's evident we all have a lot of experiences to share.

@david-wsd, you covered pretty much everything, but I started with the OpenTAP paradigm back in 2002, believe it or not.

Back in the Nokia glory days I worked with many developers on a multitude of systems. One system had the exact same concept as OpenTAP, which meant that when I saw OpenTAP for the first time I thought 'these guys must be ex-Nokians for sure'. Convergent evolution maybe, but I think it does have links to Nokia.

Reading @david-wsd's post suggests there are key areas and concepts for which to define some best practices.

I've never seen the need to wrap existing code, because the OpenTAP framework is so easy to implement for instruments.
The most difficult concept is Connections for sure, and there are caveats in some scenarios, but once nailed you never look back.
Test steps should try to be atomic, in the sense that they shouldn't rely on static variables to share data. In complex systems it's inevitable that this issue will pop up, so I have core projects that implement interfaces to allow parent/child steps to contractually share data. I have found that trying to work around this issue creates a bigger headache, as the steps themselves are part of a system for a specific task: Wi-Fi system, cellular system, Bluetooth system. Your core library begins to grow very quickly, and this results in lighter test steps.
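That interface-based contract idea can be sketched roughly as follows: the contract lives in a core library, any parent step can implement it, and children walk the parent chain looking for the contract rather than a concrete step type. All names here are hypothetical; assumes the OpenTAP package.

```csharp
using System;
using OpenTap;

// The contract, defined once in a shared core library.
public interface ICalibrationContext
{
    double PathLossDb { get; }
}

[Display("Cellular Session")]
public class CellularSessionStep : TestStep, ICalibrationContext
{
    [Display("Path Loss (dB)")]
    public double PathLossDb { get; set; } = 1.2;

    public override void Run() => RunChildSteps();
}

[Display("Measure Tx Power")]
public class MeasureTxPowerStep : TestStep
{
    public override void Run()
    {
        // Walk up the plan hierarchy to the nearest step fulfilling the contract.
        ITestStepParent p = Parent;
        while (p != null && !(p is ICalibrationContext))
            p = p.Parent;

        if (p is ICalibrationContext ctx)
            Log.Info("Compensating with path loss {0} dB", ctx.PathLossDb);
        else
            throw new InvalidOperationException("No ICalibrationContext parent found.");
    }
}
```

The child depends only on the interface, so any session step (Wi-Fi, cellular, Bluetooth) can act as the parent.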
The parent child relationship is critical to implementing a flexible suite of tests.
The biggest issue isn't necessarily to do with the sequencer and testing. I have found I have always had to write my own ResultListeners so I can choose how my reporting software processes the data. MongoDB is awesome, and by far the simplest, most flexible, and fastest database implementation I have found to date. I haven't looked at anything else since 2014.
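For readers who haven't written one, a custom ResultListener is a small plugin class; a minimal skeleton might look like this (assumes the OpenTAP package; the database calls are indicated only as comments, since the actual driver wiring, e.g. MongoDB.Driver, depends on your sink):

```csharp
using System;
using OpenTap;

[Display("Database Results", Description: "Streams published results to a database.")]
public class DatabaseResultListener : ResultListener
{
    [Display("Connection String")]
    public string ConnectionString { get; set; } = "mongodb://localhost:27017";

    public override void Open()
    {
        base.Open();
        // e.g. create the database client / get the target collection here
    }

    public override void OnResultPublished(Guid stepRunId, ResultTable result)
    {
        // Each ResultTable is a named set of columns; translate rows into
        // whatever documents/records your reporting software expects.
        foreach (ResultColumn column in result.Columns)
            Log.Debug("step {0}: column '{1}' with {2} values",
                      stepRunId, column.Name, column.Data.Length);
        // e.g. collection.InsertMany(documents);
    }

    public override void Close()
    {
        // flush buffered writes / dispose the client here
        base.Close();
    }
}
```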


Hi @jason.hicks

Thanks so much for your post today. And thank you for responding so articulately, even on a mobile device :smiley:

It sounds like you do have a lot of helpful history and experience to impart. If you are interested in crafting a longer response in the form of an interview or if you would be keen to write a guest blog at blog.opentap.io, please contact me directly at bweinberg@opensourcesense.com

Looking forward,

Bill W


@jason.hicks Lots in common

so I have core projects that implement interfaces to allow parent/child steps to contractually share data.

The parent child relationship is critical to implementing a flexible suite of tests.

I totally agree.
