Is there any way to tell Editor.exe to start running a given test plan (and step name) from arguments at launch?

Hi

I have a test that reboots the machine and needs to resume execution afterwards; the test itself is already written. The only part I am missing is how to launch Editor.exe so that it resumes the step that was running (the test can handle the logic of knowing whether it came back from a reboot).

Something like this:

```
Editor.exe -run -testplan:"C:\Path\To\TestPlan.tplan" -step:"Step Name"
```

but it doesn't seem to exist for Editor.exe (I know we can use tap.exe for this).

I also need a way for OpenTAP to save the current state of the test plan run and resume it after a reboot.

Any help here would be appreciated.

Hi @jval1984,

You can use the --open argument to open a specific test plan, but running it, or resuming it from the point where it stopped last time, is not currently possible.

Adding a --run argument would not be a big problem, but adding a general "resume" feature would require a lot of development.

If you have a simple test plan, some of this can be done as a plugin: a test step that skips execution if it already ran on the previous pass. Imagine coding logic like this into all the root-level test steps in your test plan.
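That skip-if-already-run bookkeeping is not an OpenTAP API; the pattern is sketched below in Python purely for illustration (OpenTAP plugins are normally written in C#). The file name `resume_state.json` and all function names here are assumptions, not anything OpenTAP provides: each root-level step records its completion in a state file that survives the reboot, and skips itself when it is already marked done.

```python
import json
import os
import tempfile

# Assumption: a path that survives a reboot on your system.
# (On Linux, /tmp is often cleared at boot; pick a persistent directory there.)
STATE_FILE = os.path.join(tempfile.gettempdir(), "resume_state.json")

def load_state():
    """Return the set of step names that completed before the reboot."""
    try:
        with open(STATE_FILE) as f:
            return set(json.load(f))
    except FileNotFoundError:
        return set()

def mark_done(step_name):
    """Persist a step's completion so a post-reboot run can skip it."""
    done = load_state()
    done.add(step_name)
    with open(STATE_FILE, "w") as f:
        json.dump(sorted(done), f)

def run_step(step_name, body):
    """Run `body` only if this step did not already complete on a previous pass."""
    if step_name in load_state():
        print(f"skipping {step_name} (already ran before reboot)")
        return
    body()
    mark_done(step_name)

def clear_state():
    """Call when the whole plan finishes, so the next run starts fresh."""
    if os.path.exists(STATE_FILE):
        os.remove(STATE_FILE)
```

In a real OpenTAP plugin, the equivalent logic would live in the `Run()` override of a shared test step base class, with the step setting its own verdict and returning early when the marker is present.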

I see. I would have assumed someone else had already needed to reboot during a test and then resume it, or at least resume the test plan execution. Basically, my test stress-tests the operation of a feature across reboots. I have written the test with logic to keep its state and resume, and it does work, but: #1, the test plan must contain only that one test, with no parent step (which is unfortunate, because I don't want to maintain so many separate test plans); and #2, it only works with tap.exe, because the UI does not auto-resume test execution (these are supposed to be automated stress tests).

It would be good to add functionality so that the UI can auto-resume a test via launch arguments, and also to be able to resume a running test plan while keeping its state (pass/fail/screen logs) in both the UI and the console.

One reason this particular scenario is not supported is that in the most common setup, the device running the test plan is not also the device under test, so we simply do not see this situation very often.

The most common setup, I think, is a "bench computer" running the test plan, with the device under test separate from it.

I would be interested in understanding why, in your case, it has to be the same device.