If a test plan that uses an installed plugin is transferred to another computer with different plugins installed, the test plan may end up being overwritten.
Reproduce by:
Create a new test plan
Insert the test steps Delay/Dialog/Delay/Dialog/Delay
Save the test plan
Open the test plan in Notepad, replace “DialogStep” with “XXX”, and save (the relevant part of the file is sketched below the log)
Reopen the test plan in the OpenTAP Editor.
Now only the Delay test steps are shown, and the log says:
13:07:08.919 Serializer XML Line 11: Unable to locate type ‘OpenTap.Plugins.BasicSteps.LX’. Are you missing a plugin?
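For context, the saved test plan is plain XML. Heavily abbreviated, it looks roughly like this (IDs and most step settings left out; details can differ between OpenTAP versions):

<TestPlan type="OpenTap.TestPlan">
  <Steps>
    <TestStep type="OpenTap.Plugins.BasicSteps.DelayStep">
      <!-- step settings omitted -->
    </TestStep>
    <TestStep type="OpenTap.Plugins.BasicSteps.DialogStep">
      <!-- the repro above edits this type attribute -->
    </TestStep>
    <!-- remaining Delay/Dialog/Delay steps omitted -->
  </Steps>
  <Package.Dependencies>
    <Package Name="OpenTAP" Version="..." />
  </Package.Dependencies>
</TestPlan>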
Problem:
If the user now accidentally presses Save, or AutoSave runs, then all steps from the missing plugins are lost from the test plan.
Better:
Still show the missing test steps in the test plan list (greyed out), but do not delete them from the test plan; the user may simply need to install the missing plugin.
Originally, we had a goal of making sure the Test Plan opened. No auto-save should occur, but as you point out, it would be possible to overwrite the changes. I’ve submitted an issue to our R&D team to look at making this more obvious, with the goal of preventing accidental changes to Test Plans.
In my opinion it would be better if the test plan does NOT open until all needed plugins are available, or at least the Editor does NOT make the start button pressable. Imagine an important plugin is missing, like an emergencyCheck plugin. If an operator then starts a test plan without the important safety test steps being available, it can become dangerous (for the DUT and the operator).
Understood. On the operator side specifically, with the REST API we have done some work to dynamically install plugins, so given a Test Plan we can dynamically create the environment with all the plugins needed. This helps with this issue, especially in the case of changing plugins. In that case, if the environment (called a Session in the REST API) can’t be created because of missing plugins, it will fail.
This would be a solution if the test system is online.
What if an operator is executing TestplanV01, and another test plan creator edits a copy of the test plan on another machine and adds new test steps? If that test plan is then opened on the offline machine, you will have safety problems if the new steps are suddenly missing.
Yes, I agree. It does not solve all issues. It is possible to have a local network share drive serve as the package repository, so that is one option. The other approach we have seen used, again particularly in production, is to have a “Release” plugin. This is somewhat similar to the DMM plugin you are familiar with, in that it bundles multiple plugins, but it also includes the Test Plans. This is a way to help ensure that everything needed is bundled in a single package installer. It also makes it easier to check whether a system is up to date, as there is only one package version you would need to check.
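Roughly, such a bundle’s package.xml could look something like the sketch below (the package names, versions, and file path are made up for illustration, and the exact elements may vary with your OpenTAP version):

<?xml version="1.0" encoding="UTF-8"?>
<Package Name="StationRelease" Version="1.0.0"
         xmlns="http://opentap.io/schemas/package">
  <Description>Bundles the station plugins together with the released test plans.</Description>
  <Dependencies>
    <!-- plugin packages the bundle pulls in -->
    <PackageDependency Package="OpenTAP" Version="^9.18" />
    <PackageDependency Package="MyStationSteps" Version="^2.3" />
  </Dependencies>
  <Files>
    <!-- the released test plans ship inside the same installer -->
    <File Path="TestPlans/TestplanV01.TapPlan" />
  </Files>
</Package>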
I see that .TapPlan files have a <Package.Dependencies> section. Also, the TapPlan contains class types for each step. So OpenTap definitely knows whether or not the test plan loaded alright. Maybe load failures can be handled in a more overt manner.
I have a similar issue where my test plan will not load because it can’t match the serialized name perfectly (like when you rename a class in C#, even though the step itself is unchanged), and OpenTap will then modify items in the test plan, which, if saved, will cause me to lose entire steps and/or break the links with other steps. I would rather just have it blow up and report an error so that I can either manually hack and rename the relevant items in the test plan, install the missing (or older) plugin, or make some other conscious fix, instead of it creating hidden problems.
Also, this does not bode well for future test station updates when trying to migrate an existing test plan if OpenTap is this sensitive and breaks this easily. I would rather see the serialized names simplified to just a basic package name and step name instead of using the whole long .NET object name (which requires every part of that name/path to be exact).
@Tim.Cyr on the update side, the test plan should still be able to update even if the version is not exactly the same. You may see a warning that a difference was detected, but it will load properly without changing things.
That said, in the case of a missing plugin or the test plan not loading correctly, I do agree (and it seems others do as well) that it would be better not to do a “best effort” load that alters the Test Plan, and instead just fail to load.
I have submitted this to our R&D team as an issue against the Editor.
I agree with your statement regarding the VERSIONING, and have seen this work well. However, I am talking about NAMING, which seems to be overly sensitive, and which I have seen can make refactoring problematic. Obviously the step name itself needs to be the same, but if I even do anything to shuffle things between classes then that blows up. I would think that as long as OpenTap can find a given step name in the relevant plug-in that it should work, but that does not seem to be true.
Yeah, I see your point, and I have personally experienced the issue around refactoring. The issue here is that all of the discovery is based on reflection, which takes all of those properties into account. It’s possible something more custom could be done, but my fear is that it would significantly reduce performance at load time. For shorter test plans that may not be an issue, but for longer test plans it would be. So there is some trade-off there (at least I think).
I will suggest it to the team. I’m interested to hear whether others have similar thoughts or other ideas.
@Tim.Cyr I’ve had to work around the same issues when it comes to refactoring class names / namespaces. It’s like once you pick a name, you’re committed forever!
But I wonder, when renaming a test step class, would adding something like this help maintain compatibility for existing test plans that use the old class name:
using System.ComponentModel;

[Browsable(false)]  // hide the legacy alias from the GUI
public class MyTestStep_oldName : MyTestStep_newName
{
    // Intentionally empty: exists only so test plans that reference
    // the old class name can still resolve to a type.
}
I’d personally be leery of letting OpenTAP decide which step class to load, as I imagine that could come with its own host of problems. By using the fully qualified name of the class, OpenTAP can guarantee the correct class gets loaded, even though it’s not refactor-friendly. Pick your poison, I guess.
@david-wsd Yes, I had considered the issue of letting OpenTap decide which class to map to as well. There could be an argument for developing your class as generic and using the name to differentiate between multiple DUTs (DUT1_Open vs DUT2_Open), or for going the other way and having a stand-alone class dedicated to each DUT (DUT1.Open vs DUT2.Open). I think it just comes down to coding style and preference, so there would be no good generic solution.
I guess at a high level I was just expecting more of a “dictionary”-style lookup, as opposed to doing full reflection. Although there is also something to be said for leveraging the built-in functions of .NET and using the existing XML serialization functions to push/pull things from disk. I’m looking at the same thing for managing my test parameters.
I guess to explain a bit why this came up for me: I was trying to be efficient and not replicate the same code in multiple classes. For instance, I created a custom DUT class (which inherited the OpenTap DUT class) containing all of the common custom stuff, and then my DUT objects inherited my custom class instead of the OpenTap DUT class, with the unique DUT-specific functions declared there. But OpenTap didn’t seem to like this, and my custom class was exposed (which in retrospect I guess could be fixed with the [Browsable(false)] option??). This was all also related to me trying to figure out where the best place to put my test parameters is (I’m not sure the DutSettings class is really meant to be extended to contain all of the test parameters for the entire test), which is what I’m still fighting with currently. But I will create a separate topic for this instead of polluting this thread any more.
I think the custom base class is a good design strategy. I typically mark such classes as abstract, which prevents them from being shown in the GUI, but Browsable(false) may work as well.
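A rough sketch of what I mean (class, property, and method names here are placeholders, not from your code):

using OpenTap;

// Shared base class. Marking it abstract keeps it from showing up
// as a selectable DUT type in the GUI.
public abstract class CompanyDutBase : Dut
{
    [Display("Serial Number", Group: "Common")]
    public string SerialNumber { get; set; } = "";

    // Common helper available to every concrete DUT.
    protected void LogIdentity() => Log.Info($"DUT serial: {SerialNumber}");
}

// Concrete DUT type; this is what appears in the bench settings.
[Display("DUT Type A")]
public class DutTypeA : CompanyDutBase
{
    public override void Open()
    {
        base.Open();
        LogIdentity();
    }
}

Only the concrete classes are visible, and the shared plumbing stays in one place.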
I typically put test parameter properties in the test steps, while the DUT classes contain properties that describe the DUT. But truthfully I don’t use DUT classes much.
@david-wsd, @Tim.Cyr and @LXuser: based on your feedback, our R&D has implemented a solution that gives a warning and allows you to either Cancel the load, Save As a new Test Plan, or Continue. Interested in your thoughts specifically, but also in anyone else’s in general:
Hi,
So Cancel does stop the loading of the test plan. But the messages in the log window which show the missing plugins are still available after pressing Cancel?
“Save As” then saves the test plan without the missing plugins?
What is the meaning of “Keep”? What does TAP want to keep here? Keep the errors? But if Keep is the same as Continue, then the errors are no longer there after continuing, so nothing was actually kept.
I would prefer a solution where nothing is lost and the missing steps are only marked as not executable/not found.
We could consider clearing the log panel, but normally this is a running log of all activity. Keeping it also gives the context of which plugins were missing, in case they weren’t missing intentionally.
This would save a NEW test plan without those plugins (to prevent overwriting the previous Test Plan).
I’ve already proposed updating the button to “Continue” which would be the same behavior that currently exists.
This is a possibility, but potentially difficult to resolve later. For instance, if the steps were loaded as “Disabled” and the missing plugin was then installed, I’m not sure the loaded plan would contain all the info needed to properly resolve them. At least not with the deserializer used today.
@brennen_direnzo I like this. I agree with @Tim.Cyr that the wording of the buttons might not be intuitive. I think Continue is better than Keep, especially if the message says “continuing might cause…”