Article
Peer-Review Record

Hardware-In-The-Loop and Software-In-The-Loop Testing of the MOVE-II CubeSat

Aerospace 2019, 6(12), 130; https://doi.org/10.3390/aerospace6120130
by Jonis Kiesbye 1,*,†, David Messmann 1,†, Maximilian Preisinger 1, Gonzalo Reina 2, Daniel Nagy 1, Florian Schummer 1, Martin Mostad 3, Tejas Kale 1 and Martin Langer 1,4,†
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 11 October 2019 / Revised: 18 November 2019 / Accepted: 26 November 2019 / Published: 1 December 2019
(This article belongs to the Special Issue Verification Approaches for Nano- and Micro-Satellites)

Round 1

Reviewer 1 Report

This paper deals with the very interesting and up-to-date topic of CubeSat/small-satellite verification. In particular, the “in the loop” approach pursued in this article is very interesting. The article is a valuable example of this approach thanks to a precise description of the adopted methodology and models and a well-chosen selection of results. I largely agree with the proposed approach, and I consider in-the-loop simulation a valuable strategy for an effective CubeSat verification campaign.

The paper provides a detailed description of a verification and validation (V&V) testing methodology that combines a simulation environment with selected spacecraft hardware elements to demonstrate software and algorithm functionality with flight hardware. These hardware-in-the-loop (HIL) and software-in-the-loop (SIL) approaches increase the maturity of the CubeSat program.

Going in details:

The paper is well organized into sections. The manuscript lists appropriate and relevant keywords, which are also included in the abstract. The English is very good, and the paper is easy to read. A few typos should be removed.

I see a good chance to publish this work after minor revision. The revision should address the following points:

1)      This paper has 28 sources, none of them from Aerospace and few from relevant journals, and a high number of self-citations is observed. In this sense, the cursory state-of-the-art analysis concerns me. Papers with contributions very similar to this article and published before it should be added and compared with the proposed solution. Some examples are:

Tapsawat W., Sangpet T., Kuntanapreeda S., “Development of a hardware-in-loop attitude control simulator for a CubeSat satellite”, DOI: 10.1088/1757-899X/297/1/012010.

Stesina F., Corpino S., Feruglio L. (2017), “An in-the-loop simulator for the verification of small space platforms”, IREASE, 10(2), 50–60.

Moreover, the contributions of the work described in this article should be emphasized more strongly either in the introduction section or as final remarks.

2)      Real-time operations: How do you guarantee real time for the entire duration of SIL and HIL simulations? Real time is crucial when flight software and hardware are in the loop. In your experience, is Matlab/Simulink® a valuable tool to support real-time execution?

3)      SIL architecture: please clarify the architecture of the SIL configuration. It is not clear whether the software is tested as a stand-alone process that exchanges data with the simulation unit. If so, how is the control flow of the flight software managed? That is, do you use software semaphores or threads to manage the pipe operations between the simulation process and the flight software?

4)      Models: are the models tailored for different levels of fidelity? For the same element of the simulator (e.g., solar cells/panels), do you implement multiple models with different degrees of detail, selectable according to the type of planned simulation?

5)      GUI: is a graphical user interface (GUI) used to support the operator in simulation setup, execution, real-time data visualization, and post-processing analyses? Could you provide some detail about the role of the operator and the level of autonomy of the proposed simulator?

6)      Conclusion: the conclusion is limited to a summary, with very little discussion. I would like more comments on aspects such as the reusability of the tool in future projects and the boundaries of application of the simulator.

Author Response

Dear Sir or Madam,

On behalf of all the co-authors and myself, thank you very much for your detailed feedback on the manuscript. We have worked through all the points you noted, with most of the work focused on the abstract, the introduction, and the conclusion.

Concerning point 1)
We have researched the current state of the art more extensively than in the original and added more references.
We have added a paragraph describing the state of the art to the introduction, and we now explicitly state in the introduction the two points that we see as our contribution. The beginning of Chapter 2 gives an overview of the publications that also describe simulation environments. The first publication's approach is very similar to ours, and we were not yet aware of it. Thank you very much for this suggestion!
Regarding the second publication you mentioned, I have not yet been able to find the full text. I asked the university whether they could send it to me but have not received an answer yet.

Point 2)
We use the pace block of Simulink to throttle the execution of the simulation down to real time for HIL tests. SIL tests are not throttled. With a loop rate of 20 Hz, our simulation does not require high performance, and Matlab/Simulink worked fine for this application. When going to higher loop rates for future missions with reaction wheels, we will no longer use Simulink for execution but only for compiling a plant model that will run on a real-time target computer from dSPACE.
We added a longer explanation of the loop rate, of throttling to real time, and of why we execute the simulation with Simulink to Section 2.6, Characteristics of the HIL Environment.
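To illustrate the throttling idea outside Simulink, a real-time pacer can be sketched in a few lines of Python; the step function, loop rate, and names below are illustrative and not the MOVE-II implementation:

```python
import time

def run_realtime(step_fn, loop_rate_hz=20.0, n_steps=100):
    """Run a simulation step function throttled to wall-clock real time.

    Mimics the role of Simulink's pace block: each iteration is delayed
    so that simulated time never runs ahead of wall-clock time.
    """
    dt = 1.0 / loop_rate_hz
    start = time.monotonic()
    for k in range(n_steps):
        step_fn(k * dt)                   # advance plant/controller by one step
        target = start + (k + 1) * dt     # wall-clock deadline for this step
        delay = target - time.monotonic()
        if delay > 0:
            time.sleep(delay)             # sleep only if ahead of real time
```

For a SIL run, one would simply skip the sleep so the simulation executes as fast as the host allows.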

Point 3)
Using the SIL method, we execute the algorithm for attitude determination or attitude control in a level-2 S-function, which is Simulink's mechanism for executing a C algorithm as a Simulink block. Using level-2 S-functions means that the simulation itself executes the algorithm; we do not implement it as a stand-alone process. We thereby neglect all the scheduling and I/O access operations that the firmware running on the ADCS performs.
We reworked section 2.4 Simulation Approaches so it describes the implementation of the SIL mode more clearly.
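The in-process SIL pattern described above (the algorithm invoked directly by the simulation loop, with no separate process or inter-process communication) can be sketched as follows. The B-dot-style detumbling law shown is a generic stand-in, not the MOVE-II flight code, and all names are illustrative:

```python
import numpy as np

def bdot_control(b_field, b_field_prev, dt, gain=1e4):
    """Illustrative B-dot detumbling law standing in for the flight C code.

    In the real setup this would be the C algorithm wrapped as a
    level-2 S-function; here it is just a function the loop calls.
    """
    b_dot = (b_field - b_field_prev) / dt
    return -gain * b_dot  # commanded magnetic dipole moment

def sil_step(plant_state, dt):
    """One SIL iteration: read simulated sensors, run the algorithm in-process."""
    b_now, b_prev = plant_state["b"], plant_state["b_prev"]
    dipole = bdot_control(b_now, b_prev, dt)
    plant_state["b_prev"] = b_now
    return dipole
```

Because the algorithm is a plain function call inside the simulation, no semaphores or pipes are needed, which matches the authors' statement that scheduling and I/O are neglected in SIL mode.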

Point 4)
We have only one level of fidelity for each model. The user can choose to disable (“comment through” in Simulink terminology) any of the models. The simulation approaches RS, SIL, and HIL include all models, while IS disables the sensor and actuator models.

Point 5)
We only use Matlab/Simulink for editing and parametrizing the simulation and did not implement a GUI specific to our simulation. The user selects the parameters and settings in a Matlab script. This script calls different function scripts that set the interface emulators to the right mode, issue commands to the satellite's computer, and automatically run a series of simulations, e.g., a Monte Carlo analysis or a parameter sweep. Post-processing is done with further Matlab scripts, so again we do not provide a GUI.
We extended Section 2.4, Simulation Approaches, to state clearly that the user interacts with the simulation through a Matlab script and Simulink.
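The script-driven workflow described above (one driver that configures a run and launches a series of simulations) could look roughly like the following Python sketch; the function names, parameters, and placeholder body are assumptions, not the authors' actual scripts:

```python
import itertools

def run_simulation(config):
    """Placeholder for launching one simulation run with the given settings.

    In the real workflow this step would set the interface emulators to the
    right mode, command the satellite's computer, and start the model.
    """
    return {"config": config, "converged": True}

def parameter_sweep(gains, loop_rates):
    """Run one simulation per combination of the swept parameters."""
    results = []
    for gain, rate in itertools.product(gains, loop_rates):
        results.append(run_simulation({"gain": gain, "loop_rate_hz": rate}))
    return results
```

A Monte Carlo series would follow the same pattern, with randomized initial conditions instead of a Cartesian product of parameters.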

Point 6)
The simulation itself is easily reusable; for example, one Master's student at the Florida Institute of Technology added reaction wheel models and simulated a space debris observation satellite. We will also reuse it for future satellites built in Munich and focus on moving the simulation to a real-time target PC.
We heavily reworked the conclusion to include more concepts for future applications and modifications.

We would be very grateful if you could take a look at the second revision of our manuscript and tell us whether we understood your suggestions correctly and whether there are further modifications we need to implement.

Best wishes
Jonis Kiesbye

Reviewer 2 Report

The current article explains in detail the experimental setup used in functional testing of the ADCS and the power budget validation of the CubeSat MOVE-II. The main idea of the experimental setup is to simulate the sensors, actuators, and space environment models (software), implement the ADCS algorithms on the actual hardware of the satellite engineering model, and validate the power budget of the satellite under these conditions. The experimental setup can be operated in four modes. In the Ideal Simulation mode, the control laws are implemented in Simulink without many of the blocks mimicking the high-fidelity space environment, whereas in the Realistic Simulation mode, the Simulink blocks that represent the space environment models are included. In the software-in-the-loop mode, the control algorithms are implemented in C++ but not on actual spacecraft hardware. In the hardware-in-the-loop mode, the control algorithms run on the actual hardware of the spacecraft.

The article states five applications for this experimental setup. First, the setup can be used to verify the implementation of the ADCS algorithms on the satellite's hardware. Second, the power budget of the satellite can be validated. Additional applications are training the mission operators with hardware-in-the-loop simulations, which are more realistic than software simulations alone, and testing newly developed ADCS algorithms for later missions. Finally, the setup is used to handle problems that occur during the actual mission by implementing a proposed solution on the experimental setup before applying it to the actual satellite. The disturbances and the models considered in the experimental setup are clearly explained.
The hardware-in-the-loop setup has some problems that are addressed in the paper, and solutions to the current issues are said to be in progress.

The paper is interesting and on a topic important to the space community. However, I believe the paper needs improvement. My main concerns with the paper are:

The literature survey is not sufficient. The novelty needs to be stated clearly with respect to the other ADCS hardware-related work in the literature.

100 samples seems too small to capture the statistical properties of the variables of the simulation.

The pointing accuracy error of 20 deg considered in the simulation is large.

A couple of other comments are outlined below:

In the abstract, the line “We were able to transfer simulated sensor data to the satellite and return…” is confusing, since the simulated data is transferred to an engineering model of the satellite, not to the satellite in orbit via telemetry, as explained in detail later in the paper.

The terms “IS” and “RS”, which stand for Ideal Simulation and Realistic Simulation, are used much earlier in the paper than they are clearly explained, which is a bit confusing.

Author Response

Dear Sir or Madam,

Thank you very much for your feedback. It is a great help to the team of authors to hear your view of the manuscript and to focus on the suggestions you provided.

Concerning the literature survey, we realized that we had not described the state of the art appropriately. We added more references and reworked the introduction and the first part of Chapter 2. The introduction now starts with the state of the art and then states our contribution. Chapter 2 gives a more detailed overview of the other simulation approaches we found in other publications.

Concerning the sample size of 100 in the Monte Carlo simulation, I would like to explain why we selected this number. The main concern with our sunpointing algorithms is that they do not converge for some ranges of the initial attitude. Since the range of problematic initial attitudes is typically quite large, we feel confident that, for a set of 100 initial attitude vectors, at least one will not converge if the algorithm (e.g., spinning sunpointing) exhibits problematic behavior for some initial attitudes. The second reason why we did not go higher was the time required for simulation: a set of 100 runs over one orbit each requires about one day of computation time, depending on the specs of the simulation PC.
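The reasoning above can be quantified: if a problematic region covers a fraction p of initial attitudes, the probability that 100 independent samples all miss it is (1 − p)^100, roughly 0.6% for p = 5%. A minimal sketch of such a Monte Carlo convergence check, with a stand-in predicate replacing the full orbit simulation and all names illustrative:

```python
import numpy as np

def random_unit_vectors(n, rng):
    """Sample n initial attitude directions uniformly on the unit sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def monte_carlo_convergence(converges, n_runs=100, seed=0):
    """Fraction of runs whose initial attitude leads to convergence.

    `converges` stands in for one full orbit simulation returning True/False.
    """
    rng = np.random.default_rng(seed)
    attitudes = random_unit_vectors(n_runs, rng)
    return sum(bool(converges(a)) for a in attitudes) / n_runs
```

With one simulated orbit per sample, the runtime grows linearly in the sample count, which is consistent with the authors' one-day budget for 100 runs.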

The large pointing error can be attributed to the large parasitic dipole moment of the satellite and the use of magnetorquers for attitude control. The parasitic dipole moment is mainly caused by the EPS and the transceiver of our satellite, which use ferromagnetic materials and have high magnetic emissions during operation. Our magnetorquers have only a small torque capability compared to reaction wheels and can only exert torque perpendicular to the Earth's magnetic field vector, which makes magnetorquer-based attitude control susceptible to disturbance torques.
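The perpendicularity constraint mentioned above follows directly from the magnetic torque equation τ = m × B: the cross product is always perpendicular to B, so no torque can be produced about the local field axis. A minimal sketch:

```python
import numpy as np

def magnetorquer_torque(dipole, b_field):
    """Torque on a magnetic dipole m in a field B: tau = m x B.

    tau is perpendicular to B by construction, so the component of any
    desired torque along the local field direction is unachievable.
    """
    return np.cross(dipole, b_field)
```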

Concerning the abstract, I agree that we need to pay more attention to distinguishing between the satellite in space that we are working toward and the satellite in the HIL environment that is the device under test. The abstract has been reworked and should be more precise in that respect. I would be happy if you could take a second look at the revised abstract and tell me whether it is clearer now.

The terms IS and RS were difficult to introduce, since Section 2.2.2, Controller Models, depends simultaneously on Section 2.2, Simulation, and Section 2.4, Simulation Approaches. We added one sentence introducing RS and IS in Section 2.2.2, right after the reference to Section 2.4.

We also reworked the conclusion to be clearer about which part of the results it refers to.

I would be most grateful if you could take a look at the revised manuscript and give us feedback on whether we addressed your concerns appropriately. Perhaps you can also provide suggestions on what else we should improve.

All the co-authors and I would like to thank you for the work you have invested in the feedback process so far, and we look forward to hearing back from you.

Best regards
Jonis Kiesbye

Round 2

Reviewer 1 Report

Authors have successfully implemented my suggestions to improve the paper that is now ready for the publication.

Reviewer 2 Report

Thanks for addressing my earlier comments. 

 
