Pyrenn Levenberg-Marquardt (LM) Neural Network Training Algorithm as an Alternative to Matlab's LM Training Algorithm

First – this isn’t an article bashing Matlab – on the contrary, I’ve used and depended on Matlab as one of my many engineering tools my entire career. However, Matlab is not free, and it’s not cheap: a commercial license runs around $2,000, plus another $1,000 for the Deep Learning (formerly Neural Network) toolbox. So when there are alternatives for specific tasks, it’s always worth taking a closer look. The Pyrenn LM training algorithm for Feed-Forward (and Recurrent) Neural Networks runs in Matlab or Octave – or you can run the Python version. And it’s free. Thus if you’re developing Neural Network applications but can’t afford the cost of Matlab, you can run the Pyrenn LM source code in Octave. Even in Matlab, you’ll achieve better overall performance using the Pyrenn LM training algorithm than using Matlab’s LM training algorithm.

Most of my Neural Network application efforts in the past have used Feed-Forward Neural Networks, and I’ve always used the fastest training method (since graduating from back-propagation in the early days) – the Levenberg-Marquardt optimization algorithm. In fact, only 1% of my time on any Neural Network application is spent on training the Neural Networks – because the LM method is so damn fast. Most of my time is spent where it needs to be – on understanding and designing the training and test sets. I learned long ago that the architecture is of 2nd or 3rd order importance compared to the quality of the training and test data sets – those are of 1st order importance.

The LM optimization algorithm has been used reliably, for decades, across many industries to rapidly solve optimization problems – it’s known for its speed. The only potential downside is the large memory required for large problems (the Jacobian matrix grows rapidly with the number of parameters and training samples). Fortunately, most of the Neural Network applications that I’ve worked on don’t require huge data sets. And typically, if you have a large data set – such as with image processing – the intermediate step is to perform some type of Principal Component Analysis (PCA) so that the primary features of the large data set can be extracted and represented with a smaller data set, which is then more tractable for a Neural Network.

This article discusses the results of testing both the Pyrenn LM and Matlab LM training algorithms on a simple quadratic curve. The summary results are shown below, followed by a technical appendix that discusses the details of all of the testing. At the very end of this article are three videos: 1) using the code in Matlab, 2) using the code in Octave, and 3) an informal code “walk-through”. Following the videos is a link to a downloadable zip file containing all of my source code (and the Pyrenn source code) used for the analysis in this article, so that you can run it yourself – either in Matlab or in Octave.

Before going any further, you can obtain the Pyrenn library, with both Python and Matlab code, here – https://pyrenn.readthedocs.io/en/latest/. A big “Shout Out” to Dennis Atabay and his fellow Germans for not only building this awesome algorithm but doing it in two languages, Matlab and Python. Then again, most Germans are bilingual (at a minimum), so I suppose it’s to be expected. The code is very well commented – but you’ll need Google Translate to take the comments from German to English.
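
To give a feel for the library’s Matlab / Octave interface, here is a minimal sketch of creating, training, and querying a small network. It assumes the pyrenn source directory has been added to your path; CreateNN, train_LM, and NNOut are the function names from the pyrenn documentation, and the data here is just a toy example.

```matlab
% Minimal pyrenn sketch (assumes the pyrenn directory is on the Matlab/Octave path)
P = linspace(-5, 5, 21);               % training inputs: one row per input, one column per sample
Y = P.^2;                              % training targets
net = CreateNN([1 4 1]);               % 1 input, one middle layer of 4 Neurons, 1 output
net = train_LM(P, Y, net, 100, 1e-5);  % LM training: max 100 iterations, error goal 1e-5
yOut = NNOut(P, net);                  % query the trained network
```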

System Modeled for Bench Testing the Matlab and Pyrenn Neural Networks

A simple test case that can be used to bench test any Feed-Forward Neural Network is the standard quadratic curve shown below. It’s not complex, but it is nonlinear, and it shouldn’t be hard to train a Neural Network to “learn” the nonlinear curve properties and extrapolate reasonably, to some degree, outside the training regime.

Simple Quadratic Curve

The actual quadratic curve used for this article is shown below. The blue stars represent the Neural Network training points – the X and Y coordinates of each point are the input and output training data respectively. The red stars represent the test points – note that the test set lies both inside and outside the training area. This is actually used as Test Case #1 – the farthest “outside” test point reaches approximately 33% beyond the training regime.

Matlab Generated Quadratic Curve for Training and Testing
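
As a point of reference, a data set with these proportions could be generated along the following lines. This is a sketch, not the article’s actual script – the training range of ±12 is an assumption (it makes the ±16 test extremes land roughly 33% beyond the training regime, consistent with the numbers above), and the point spacing may differ from the plots.

```matlab
% Sketch of the Test Case #1 data sets (the +/-12 training range is an assumption)
xTrain = -12:2:12;   yTrain = xTrain.^2;   % blue stars: training points
xTest  = -16:2:16;   yTest  = xTest.^2;    % red stars: test points, inside and outside
plot(xTrain, yTrain, 'b*', xTest, yTest, 'r*');
legend('Training points', 'Test points');
```
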
Testing Methodology and Procedure

Three test cases were set up for bench testing both the Matlab LM and Pyrenn LM trained Neural Networks. These test cases reached outside the training regime by 33% (Test Case #1), 108% (Test Case #2), and 316% (Test Case #3). The point was to push the Neural Networks hard during testing (how well do they perform outside the training regime?).

In each of the test scenarios, the Matlab LM algorithm was used to train 10 Neural Networks – the best one, with the lowest test error, was selected to compete against the Pyrenn LM algorithm. In a similar manner, the Pyrenn LM algorithm was used to train 10 Neural Networks, and again, the best one was selected as the competitor.
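
In pyrenn terms, that selection process might look something like the sketch below, reusing the Test Case #1 data from the earlier snippet. The accumulated-absolute-error metric is my assumption of what “lowest test error” means here.

```matlab
% Train 10 networks from random initial weights and keep the best one
bestErr = Inf;
for i = 1:10
    net = CreateNN([1 4 1]);                      % fresh random initial weights
    net = train_LM(xTrain, yTrain, net, 100, 1e-5);
    err = sum(abs(NNOut(xTest, net) - yTest));    % accumulated test error (assumed metric)
    if err < bestErr
        bestErr = err;                            % current lowest test error
        bestNet = net;                            % current best competitor
    end
end
```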

For Test Case #1 and Test Case #2, this process was also performed for three different architectures: 1) one middle layer with 4 Neurons, 2) two middle layers with 4 Neurons each, and 3) two middle layers with 8 Neurons each. For Test Case #3, only the first and last architectures were used for testing – the reason being that I was running out of time for getting this article finished and posted (my own self-imposed deadline).

Performance Summary

In the plots below, the three types of architectures tested are represented along the X-axis by: (1) middle layer with 4 Neurons, (2) two middle layers with 4 Neurons each, and (3) two middle layers with 8 Neurons each. The Y-axis is the average error for all 10 Neural Networks that were tested for each of these architectures.

Test Case #1 represents a data set that reaches approximately 33% beyond the training regime boundary. Test Case #2 represents a data set that reaches approximately 108% beyond the boundary. And Test Case #3 is “really out there” with a reach of 316% beyond the boundary. Of course, the farther the test points reach beyond the training regime, the lower the expected performance.

In all cases, the Pyrenn LM algorithm (blue line) far outperformed the Matlab LM algorithm (red line) – the lower the error, the better the performance.

Note that increasing the size of the Neural Network architecture – adding more middle layers and more Neurons per layer – does not lead to increased performance. Smaller is better for this application.

The results generated by the Pyrenn LM Neural Network training algorithm are impressive and, based on my experience in the past, are likely indicative of the level of performance to be expected with more complex systems.

More test details can be obtained by reviewing the Technical Appendix below.

Technical Appendix

The testing process was driven by: 1) extending the test points farther outside the training regime (Test Case #1, Test Case #2, and Test Case #3), and 2) varying the Neural Network architecture for each of the test cases.

Test Results for Data Set #1

1 Middle Layer – 4 Neurons

In this first test case, a simple Neural Network architecture is used – one “middle” layer with four Neurons – as shown below.

Neural Network Architecture – 1 Middle Layer with 4 Neurons

The results of training Neural Networks with both the Pyrenn and Matlab LM training algorithms are shown below. The red circles on the curve are the target test points, and the red stars are the Neural Network’s outputs for those test inputs (the output Y coordinate given the input test X coordinate) – the hope is that the stars land on the circles. Even if they are not exact, depending on the overall trend, it can still be considered good performance.

The blue stars are the Neural Network outputs (the Y coordinate for a given X coordinate input) for the training points. The expectation is that if the training is good, at a minimum the Neural Network will correctly reproduce the training Y coordinates. If it can’t do that, then there’s no point in looking at the test-point performance.

Each of the two plots represents the best-performing Neural Network out of a total of 10 – that is, the best of the 10 generated by the Pyrenn LM algorithm, and the best of the 10 generated by the Matlab LM algorithm.

Comparison of Performance between Pyrenn and Matlab

As shown below, the accumulated test errors were far less for the Pyrenn LM-trained Neural Networks than those trained by the Matlab LM algorithm. Note that the errors were sorted from lowest to highest.

Matlab and Pyrenn Test Error Curves

The Command Window output from each session is shown below. Note that the difference in errors between the two LM algorithms is between two and four orders of magnitude.

2 Middle Layers – 4 Neurons Each

In this case, another “middle” layer was added with four more Neurons.

Neural Network Architecture – 2 Middle Layers with 4 Neurons

While the performance of the Pyrenn LM-trained Neural Networks was maintained, the change in architecture resulted in worse performance for the Matlab LM-trained Neural Networks. Each of the two plots represents the best-performing Neural Network out of a total of 10 – that is, the best of the 10 generated by the Pyrenn LM algorithm, and the best of the 10 generated by the Matlab LM algorithm.

Comparison of Performance between Pyrenn and Matlab

Once again the accumulated errors for the Pyrenn LM-trained Neural Networks were far less than those of the Matlab LM-trained Neural Networks. Note that the errors were sorted from lowest to highest.

Matlab and Pyrenn Test Error Curves

The Command Window output for each session is shown below. Note that the difference in errors between the two LM algorithms is between two and three orders of magnitude.

2 Middle Layers – 8 Neurons Each

Again the architecture was modified to have eight Neurons in each of two “middle” layers, as shown below.

Neural Network Architecture – 2 Middle Layers with 8 Neurons

The performance of the Matlab LM-trained Neural Networks continued to deteriorate while the Pyrenn LM-trained Neural Networks maintained good performance. Each of the two plots represents the best-performing Neural Network out of a total of 10 – that is, the best of the 10 generated by the Pyrenn LM algorithm, and the best of the 10 generated by the Matlab LM algorithm.

Comparison of Performance between Pyrenn and Matlab

As before, there was a significant difference between the performances of the Neural Networks trained by the Pyrenn LM algorithm and those trained by the Matlab LM algorithm. Note that the errors were sorted from lowest to highest.

Matlab and Pyrenn Test Error Curves

The Command Window output for each of the training / test sessions is shown below. Note that the difference in errors between the two LM algorithms is between two and three orders of magnitude.

Test Results for Data Set #2

For this second test case, the reach of the test data points outside the training regime was increased. Whereas for the first test case the minimum and maximum test points were (-16, 256) and (+16, 256), the new test range minimum and maximum points were (-25, 625) and (+25, 625).

1 Middle Layer – 4 Neurons

For the first architecture, a simple Neural Network is used – one “middle” layer with four Neurons. The results of training Neural Networks with both the Pyrenn and Matlab LM training algorithms are shown below. Each of the two plots represents the best-performing Neural Network out of a total of 10 – that is, the best of the 10 generated by the Pyrenn LM algorithm, and the best of the 10 generated by the Matlab LM algorithm.

Comparison of Performance between Pyrenn and Matlab

While one particular Matlab LM-trained Neural Network performed well, the accumulated test errors were far less for the Pyrenn LM-trained Neural Networks than for those trained by the Matlab LM algorithm (because the majority of the Matlab LM-trained Neural Networks did poorly). Note that the errors were sorted from lowest to highest. One way to interpret the plot is that the Pyrenn LM algorithm generated far more high-performing Neural Networks than the Matlab LM algorithm.

Matlab and Pyrenn Test Error Curves

The Command Window output for each of the training / test sessions is shown below. Note the large percentage of Pyrenn LM generated Neural Networks with low test errors.

2 Middle Layers – 4 Neurons Each

In this case, another “middle” layer was added with four more Neurons. The performance of the Matlab LM-trained Neural Networks deteriorated tremendously while the Pyrenn LM-trained Neural Networks maintained good performance. Each of the two plots represents the best-performing Neural Network out of a total of 10 – that is, the best of the 10 generated by the Pyrenn LM algorithm, and the best of the 10 generated by the Matlab LM algorithm.

Comparison of Performance between Pyrenn and Matlab

As shown below, the accumulated test errors were far less for the Pyrenn LM-trained Neural Networks than those trained by the Matlab LM algorithm. Note that the errors were sorted from lowest to highest.

Matlab and Pyrenn Test Error Curves

The Command Window output for each of the training / test sessions is shown below. Note that the difference in the test errors is an order of magnitude.

2 Middle Layers – 8 Neurons Each

In this case, the architecture was modified to have eight Neurons in each of the two “middle” layers. The Pyrenn LM Neural Network performance degraded a little, while the Matlab LM Neural Network performance was just slightly worse than the already “very bad” performance with the previous architecture. Each of the two plots represents the best-performing Neural Network out of a total of 10 – that is, the best of the 10 generated by the Pyrenn LM algorithm, and the best of the 10 generated by the Matlab LM algorithm.

Comparison of Performance between Pyrenn and Matlab

As shown below, the accumulated test errors were far less for the Pyrenn LM-trained Neural Networks than those trained by the Matlab LM algorithm. Note that the errors were sorted from lowest to highest error.

Matlab and Pyrenn Test Error Curves

The Command Window output for each of the training / test sessions is shown below. Note that the difference in the test errors is an order of magnitude.

Test Results for Data Set #3

For this third test case, the reach of the test data points outside the training regime was increased again. Whereas for the second test case the minimum and maximum test points were (-25, 625) and (+25, 625), the new test range minimum and maximum points were (-50, 2,500) and (+50, 2,500).

1 Middle Layer – 4 Neurons

For the first architecture, a simple Neural Network is used – one “middle” layer with four Neurons. The results of training Neural Networks with both the Pyrenn and Matlab LM training algorithms are shown below. Each of the two plots represents the best-performing Neural Network out of a total of 10 – that is, the best of the 10 generated by the Pyrenn LM algorithm, and the best of the 10 generated by the Matlab LM algorithm.

Comparison of Performance between Pyrenn and Matlab

As shown below, the accumulated test errors were far less for the Pyrenn LM-trained Neural Networks than those trained by the Matlab LM algorithm. Note that the errors were sorted from lowest to highest.

Matlab and Pyrenn Test Error Curves

The Command Window output for each of the training / test sessions is shown below. Note that the difference in the test errors is approximately an order of magnitude.

2 Middle Layers – 8 Neurons Each

In this case, the architecture was modified to have eight Neurons in each of the two “middle” layers. The Pyrenn LM Neural Network performance degraded significantly, but the Matlab LM Neural Network performance totally fell apart. Each of the two plots represents the best-performing Neural Network out of a total of 10 – that is, the best of the 10 generated by the Pyrenn LM algorithm, and the best of the 10 generated by the Matlab LM algorithm.

Comparison of Performance between Pyrenn and Matlab

As shown below, the accumulated test errors were less for the Pyrenn LM-trained Neural Networks than those trained by the Matlab LM algorithm. Note that the errors were sorted from lowest to highest.

Matlab and Pyrenn Test Error Curves

The Command Window output for each of the training / test sessions is shown below.

Software Discussion

The three videos below cover: 1) running the code in Matlab, 2) running the code in Octave, and 3) a “code walk-through”.

Video #1 – Running the Code in Matlab

The video below shows how to run the software in Matlab. Click on the lower right square icon (next to the sound / speaker icon) to enlarge the video to almost the size of the monitor in order to more easily view it.

Video #2 – Running the Code in Octave

Note that it takes longer to run the Pyrenn LM algorithm in Octave – but the results are similar to those obtained in Matlab. In the example shown below, the run time was approximately 182 seconds (3 minutes, 2 seconds) versus approximately 26 seconds for a similar run in Matlab.

However, if you’re using Octave because you don’t have access to Matlab, then the additional training time is a small price to pay.

The plot below, which corresponds to the above test run, shows the results of running the Pyrenn LM training algorithm on Test Case #1 with the simple single-middle-layer, 4-Neuron architecture.

The video below shows how to run the software in Octave. Click on the lower right square icon (next to the sound / speaker icon) to enlarge the video to almost the size of the monitor in order to more easily view it.

Video #3 – Code Walk-Through

The video below is an informal “code walk-through” of the Matlab functions. Click on the lower right square icon (next to the sound / speaker icon) to enlarge the video to almost the size of the monitor in order to more easily view it.

Software Download

The software (Matlab and Pyrenn source code and directories), as a zip file, can be downloaded from the link below.

A Lesson in Perseverance: Development of a Prototype AI Neural Network Helicopter Control System

It was in September of 2001 that I was on leave-without-pay from my job at Northrop Grumman, my wife was on doctor-ordered bed rest because our 3rd son was trying to “arrive early” (and we had two other toddler sons, ages 4 and 2), and I was working 12+ hours a day in my garage on a special personal project. The project was focused on demonstrating that a Neural Network attitude control system could successfully stabilize a radio-control helicopter along the roll and pitch axes. I’d created many successful Neural Network applications in simulation in the past, but this was the first time that I was attempting to implement a Neural Network solution for a complex, hardware-driven, real-world system.

The basic objective was that, given a set of commanded values for roll and pitch attitude, the helicopter’s Neural control system would solidly maintain the helicopter at those attitudes – thus it should be able to hover in a stable manner (if the commanded roll and pitch angles were appropriate for hover). For a flight test, I’d preset the commanded roll and pitch values in the flight computer – a “guesstimate” at first (a slight negative roll angle, knowing the effect of the tail rotor force). The expectation was that while the helicopter was in the air, I could update the commanded roll and pitch values via the keyboard until I found a stable position – the Neural roll and pitch controllers would keep the helicopter at those commanded attitude values. The laptop flight software would save those values for the next flight.

I’d already spent a lot of time wiring up the system, designing the Neural controllers, “flight testing” on the test stand, etc., but the final success of stabilized flight in my backyard seemed to elude me. My Northrop boss had been great in giving me the time off (leave-without-pay, as I’d already burned up all my vacation time – but he held my job for me) – but he kept asking “when are you coming back?!?”.

So it was on a Friday that I called him – “I need two more weeks – after that, no matter what, I promise that I’ll be back”. That was, of course, two more weeks without pay. But Northrop management had bent over backwards to accommodate my insane endeavor.

While I told him that it would just be two weeks, in my mind I assumed that at this rate, it would be at least 3, maybe 4 more months until the goal was achieved – and of course I’d be working like a mad scientist in the evenings and during the weekends (and I couldn’t neglect my family – they needed my time as well). So it was all quite a bit depressing.

Nevertheless I worked all day Saturday (14 hours), took Sunday off (sometimes you have to step away and get new perspective), and worked another long day on Monday. On Tuesday morning … the Neural control system was successfully stabilizing the helicopter in flight in my back yard – as shown in the video below. So instead of being months away, I was only 3 days away from the successful test flight. It was on Monday that I’d found what I thought was the problem – and proved the solution Tuesday morning in my backyard.

The Lesson – We Never Know How Close We Are to Success

The lesson here was that on that Friday, I believed that I was still months away from getting the system to successfully hover in free-flight (not on a test stand) – yet in reality I was only 3 days short of the objective. It was a reminder to me that we can never give up – because we never know how close we are to reaching our objective. Many people quit too soon – never knowing that they were just hours or days away from achieving their success.

The solution that locked in the successful first flight is discussed at the end of this article.

Free-Flight Testing

After performing a few more flight tests in my backyard with extended training landing gear – and being reasonably confident that the Neural control system was very stable – I asked an RC pilot friend to help me test the system in an area out in the country. The objective was to test the helicopter Neural control system at 10-30 feet altitude with no safety gear – and simply let it hover for extended periods of time (10 to 20 minutes). These tests would solidify my confidence level regarding the performance and stability of the Neural control system.

The experienced RC pilot would be on hand to take control of the helicopter, via RC transmitter, if an anomaly occurred. A safety switch at the end of the tether on the ground (wired to the helicopter via the tether) gave the RC pilot either full control over the helicopter for emergencies or only partial control during testing of the Neural control system. In the partial-control mode, the pilot controlled just the throttle / collective and tail rotor, so as to raise and lower the helicopter in altitude while the Neural control system stabilized the helicopter about the roll and pitch axes.

The following videos are of the subsequent flight tests that we performed out in the country – the purpose was to continue to test the stability and performance of the Neural control system. Each video starts and ends with the Neural control system actively stabilizing the roll and pitch attitude.

Flight Test #1
Flight Test #2
Flight Test #3

Technical Background

The rest of this article goes into detail on the effort that was involved to make this happen – Neural controller design, avionics integration, problem resolution, etc.

Neural Network Control System

The original idea had been to demonstrate that a closed-loop Neural Network attitude control system (ergo a real-time flight control system) could easily stabilize a helicopter about the roll and pitch axes. It’s not an easy problem – try to hover an RC helicopter if you’ve never done it. Typically a “newbie” will tend to overcompensate on the joysticks and cannot maintain stability about the roll and pitch axes even if his or her life depended on it. The best way to learn is on a computer simulator before trying the real thing – then the many crashes one will experience during the learning process aren’t a problem (better than destroying an actual RC helicopter).

Thus I had to break it down into a classical control problem – plant, feedback error, compensation signal, etc. And that’s before, of course, bringing hardware into the equation.

Control System Diagram

In the classic control diagram below, the system being controlled is the roll axis attitude error and roll rate of the helicopter (and this example applies to the pitch axis mode as well). The objective of the control system is to be able to quickly, and in a stable manner, zero out the roll attitude error and roll attitude rate.

The “plant” is the helicopter itself – specifically the helicopter dynamics. Starting on the left in the diagram below, a fixed commanded roll attitude value (which could be positive or negative) is compared with the actual roll attitude value to produce a “roll error” (the difference is the error). This “roll error”, along with the roll attitude rate, is input and propagated through a controller, which attempts to zero out both the roll error and the roll rate. In this case, the controller is a Neural Network, which outputs a correction signal (compensation) – specifically a servo actuator rate command, a delta value. This delta is added to the current servo command value and is then sent to the actuator to update the servo position.

This action feeds into the helicopter dynamics, as the main rotor dynamics are affected by the updated roll servo actuator motion (the main rotor disk rolls to the right or left). The resulting change in motion of the helicopter is measured by the attitude sensor, which feeds this information back to the flight software – and the cycle repeats every 50 milliseconds (20 Hz).
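
To make the cycle concrete, here is a Matlab-style pseudocode sketch of one pass through the loop. The actual flight software was written in C, and every helper name below (readAHRS, neuralNetStep, sendServo) is hypothetical.

```matlab
% Pseudocode sketch of the 20 Hz roll control cycle (all helper names hypothetical)
while flying
    [rollMeas, rollRate] = readAHRS();            % measured attitude from the sensor
    rollErr  = rollCmd - rollMeas;                % attitude error (commanded vs actual)
    deltaCmd = neuralNetStep(rollErr, rollRate);  % NN outputs a servo rate (delta) command
    servoCmd = servoCmd + deltaCmd;               % add the delta to the current servo command
    sendServo(servoCmd);                          % update the servo position
    pause(0.05);                                  % the cycle repeats every 50 ms (20 Hz)
end
```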

Classic Control Block Diagram
Neural Network / Hardware Diagram

A general schematic of the entire system, showing details of the Neural controllers and the hardware, is shown below. A Crossbow (the company was acquired by Moog, Inc. in 2011) Attitude and Heading Reference System (AHRS) was mounted on the front of the helicopter – it provided stabilized roll and pitch data to the control system. Note that I used an RC tail rotor stability device to keep the tail steady – the initial focus of the project was on the helicopter roll and pitch axes (one problem at a time). The measured roll and pitch attitude outputs were provided, at a rate of 20 Hz, to two Neural Network modules – one performed roll control and the other performed pitch control. Each Neural Network then output the required servo motor step value given the attitude error and rate values.

It’s important to note that both Neural controllers were identical – the Neural Network developed from roll test data was also used to control the pitch axis. Thus the Neural roll controller, which learned from the roll dynamics only, also easily handled the very different pitch axis dynamics.

Basic Avionics / Electronics Configuration

In the above diagram, the Crossbow AHRS is colored gold – this was a later model. The image was taken from their website (many years ago) for illustration purposes. The actual AHRS used in this effort was colored black as you’ll see in the hardware images further down in the article.

Flight Test Schematic

The field flight test schematic is shown below. The flight software – including the Neural Network controllers – was coded in C (using a Borland C compiler), running under DOS 6.22 on an HP laptop. The 90-foot tether, which connected the laptop with the helicopter, contained two RS-232 cable sets – one received the data from the AHRS (on board the helicopter), and the other sent the updated servo commands to the helicopter.

In the laptop, an input text file contained the preset commanded roll and pitch attitude values for the Neural control system which would attempt to maintain the helicopter at these preset commanded attitudes. While the helicopter was in the air, if it started to drift to the left for example, I would incrementally increase the commanded roll, via arrow keys on the keyboard, until the helicopter stopped and maintained a reasonable hover (so that it didn’t drift all over the place). The flight computer would save the updated values for future flights. So typically after one flight, I didn’t have to update the commanded roll and pitch values as the natural hover points had already been established.

Helicopter System Flight Control Schematic

Development Laboratory – My Home Garage and Office

All of the development was performed in my home – specifically in my home office, my garage, and my backyard. All of the costs came out of my personal funds as well – the RC helicopter, the avionics equipment, laptops, tooling, etc. The most expensive single item was the Crossbow AHRS at just over $4,000 (remember that this was in 2001). Try convincing your spouse of the value of purchasing a small black box, for $4,000, that appears to do nothing and is not useful inside the house!! Nowadays these kinds of systems can be purchased for just a few hundred dollars.

Flight Test Stand

A flight test stand – upon which the helicopter would be mounted – was used in the development of the Neural controllers and also for preliminary testing. The image below shows the test stand with some explanations of the parts.

Helicopter Test Stand

The image below shows the helicopter during a particular test (airframe test only) on the test stand. The test stand served several purposes, including vibration testing of the AHRS and generating roll profile data for creating the Neural Networks that would be used as the roll and pitch controllers.

Helicopter Undergoing Testing on Aluminum Test Stand
Flight Computer – My HP Laptop

There were two laptops dedicated to the effort – one made by HP and the other made by Compaq (this was just before the time that the two companies merged). The HP laptop was used as the flight computer for controlling the helicopter (laptop on the ground communicating with the helicopter in the air via a 90 foot tether) while the Compaq laptop was used for bench testing avionics and other hardware components.

In the image below, the Compaq laptop is shown performing a test of the Crossbow AHRS unit.

Electronics / Avionics Test Bench Laptop
Avionics Integration – Phase-1

The image below shows the initial avionics setup – yes, it’s very primitive, but when you’re doing something like this on your own – with your own funds, on your own time, and just needing a prototype – it’s sufficient. And while I tried to be careful, I did make some mistakes – one time I mis-wired the power and ground leads and burned up a Pontech servo controller board. That was a bad day.

The image below shows the basic layout.

Electronics Layout

The image below shows a different perspective with the Crossbow AHRS unit attached. However, after doing a lot of testing with the helicopter on the test stand, I realized that the AHRS was going to need to be isolated from the airframe vibration. Needless to say, it can be unnerving having a helicopter’s main rotor disk spinning at around 1,700 rpm in your garage in close quarters (yes a 5-6 foot diameter main rotor disk spinning that fast can take your head off in a split second).

Electronics / Avionics Layout
Avionics Integration – Phase-2

In this phase, because of balancing issues, I decided to make the avionics integration more compact (move the weight closer to the main rotor shaft) – the revised layout is shown below. In addition, the AHRS was mounted inside a metal box with foam pads to provide the previously discussed vibration isolation. Yes, it looks like a mess, but everything was tied down pretty solidly – I just needed it to work for field testing.

Updated Avionics Configuration
Helicopter Flight Test Configuration

Since I wasn’t an experienced RC helicopter pilot, for backyard flight testing I used “trainer landing gear” so that if I had to take control of the helicopter near the ground, I wouldn’t overcompensate on the joysticks and flip the helicopter over on its side (a catastrophic situation, since the main rotor blades would hit the ground while turning at 1,700 rpm). The extended landing gear skids would likely “catch” the helicopter and give me a chance to recover and upright the aircraft.

Helicopter with Trainer Landing Gear

Neural Controller Design

What are Neural Networks?

The following is a definition from Wikipedia which I think is reasonable:

Artificial neural networks or connectionist systems are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems “learn” to perform tasks by considering examples, generally without being programmed with task-specific rules.

For this project, Feed-Forward Neural Networks were used – an example is shown below with two inputs, one middle (sometimes called “hidden”) layer, and one output. The lines connecting the nodes are called weights or gains. For example, the input to P1 (P = Processing element) is input X multiplied by w1 plus input Y multiplied by w2. Mathematically speaking, each processing element is a hyperbolic tangent function whose minimum and maximum values asymptotically approach -1 and +1 respectively.
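
As a tiny numerical illustration of that forward pass: the article only names w1 and w2, so the second processing element’s weights (w3, w4) and the middle-to-output weights (v1, v2) below are my own placeholder labels.

```matlab
% Forward pass through a 2-input network with a 2-Neuron middle layer and 1 output
X = 0.3;   Y = -0.7;             % example inputs
w1 = 0.5;  w2 = -0.2;            % weights into P1 (named in the text)
w3 = 0.1;  w4 = 0.8;             % weights into P2 (assumed labels)
p1 = tanh(w1*X + w2*Y);          % each processing element is a hyperbolic tangent
p2 = tanh(w3*X + w4*Y);
v1 = 1.2;  v2 = -0.4;            % middle-to-output weights (assumed labels)
out = v1*p1 + v2*p2              % network output
```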

The “learning” or “relationship mapping” is contained in the architecture of the processing elements and the interconnected weights / gains. The ability to learn nonlinear systems is derived from the nonlinear nature of the hyperbolic tangent processing elements.

Feed-Forward Neural Network
Why Use Neural Networks for Control?

Well – what is the purpose of Artificial Intelligence?

The purpose is to teach a “system” not only how to perform a task or series of tasks, but also how to create effective new solutions for circumstances for which it was not trained.

The automotive robotic systems shown below are very complex – however, they can only perform very specific tasks for which they are programmed.

Programmed Robotics
Programmed Robotics

The idea with Artificial Intelligence is just that – the system has some kind of intelligence that enables it to make decisions for situations beyond its training regime.

Intelligent Robotics

I’d already had a lot of experience applying Feed-Forward Neural Networks to a variety of simulation and image-recognition applications with amazing success – thus this application was just the next step. Neural Networks could be used to handle more complex helicopter control problems such as handling a sling-load, maintaining stability in very strong gusting / turbulent winds, etc.

Training the Neural Network from Transient Response Example

The basic concept was to use a transient response (a rapidly decaying sinusoidal wave) as the “behavior to learn or emulate” in building an example training data set for the Neural Network. In other words, teach the Neural Network that it should quickly damp out attitude error and drive attitude rate to zero in the process. Examples of various types of transient responses are shown below.

Transient Response Examples
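
A rapidly decaying sinusoid of this kind takes only a few lines of Matlab to produce – the amplitude, decay rate, and frequency values here are arbitrary placeholders, not the values used on the test stand.

```matlab
% Example decaying sinusoidal transient response (all parameter values assumed)
t    = 0:0.05:5;                       % 20 Hz time base over a 5 second window
A    = 10;                             % initial amplitude (degrees of roll)
zeta = 1.0;                            % decay rate
w    = 2*pi;                           % oscillation frequency (rad/s)
roll = A * exp(-zeta*t) .* sin(w*t);   % rapidly decaying sinusoidal wave
plot(t, roll); xlabel('Time (s)'); ylabel('Roll (deg)');
```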

The helicopter transient response data (relationship between the servo actuator command profile and the response of the main rotor disk which is measured by the AHRS unit) would be generated in the following manner:

1) Mount the helicopter (with avionics gear) on the test stand.
2) Bring the helicopter up to full power (just enough throttle / collective for takeoff).
3) Run a transient response curve through the roll servo (from the laptop computer) in order to get a decaying sinusoidal motion of the helicopter as shown in the image below. The AHRS unit would measure the roll profile which would be recorded by the laptop computer.

Sinusoidal Roll of Helicopter on Test Stand for Generating Training Data

Once the data is recorded, an area of the data is sectioned off for training and the data is curve-fit. The illustration below explains the objectives in setting up the training data.

Training Data Snapshot

The raw transient response roll profile – generated while the helicopter was on the test stand – is shown below.

Raw Helicopter Transient Response Data

The next step was to select the training data window, as shown below.

Windowed Training Data Region

The final step, before scaling for training, was to move the “target” roll value to match the actual settled roll value and thus have a zero-error condition at the end of the training set. As shown below, the error (in gray) was measured from a horizontal line just slightly above the zero attitude angle line.

Illustration of Roll Error and Roll Rate Training Data Curves
Training Algorithm

For this effort I used the 1998 Matlab Neural Network toolbox (running on a 32-bit Windows operating system) – specifically the Levenberg-Marquardt (LM) training algorithm for Feed-Forward Neural Networks. As background, the Levenberg-Marquardt optimization algorithm is used industry-wide to solve all types of optimization problems quickly. It is known for producing robust, optimal solutions much faster than other similar algorithms. To put it mildly, it blows the doors off all other training algorithms for Feed-Forward Neural Networks.

These days I still use Matlab’s LM training algorithm – it’s now part of the “Deep Learning” toolbox. But in addition I’ve started using the Pyrenn Levenberg-Marquardt training algorithm as well. Here is the link to their site with downloadable Python and Matlab code – https://pyrenn.readthedocs.io/en/latest/. In fact, my next blog article will discuss an effort I did recently to compare performance of the two LM training algorithms.

Performance Shaping Technique

A special technique that allows the user to adjust performance in real time (and that not many people know about) is called “Performance Shaping”. It gives the user the ability to “dial in” various degrees of performance depending on the desired or changing performance requirements. This feature adds a measure of additional adaptability for changing conditions (especially those for which the Neural Network was not trained).

The Performance-Shaping (hereafter PS) capability is integrated by adding two converging lines that form an envelope around the transient response. This tells the Neural Network that the PS values drive the required performance. Thus when the Neural Network is in active operation (post-training, of course), it can be commanded to increase (tighten) or decrease (loosen) performance by changing the PS input values.

Fundamental Concept of Performance-Shaping
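
A minimal sketch of how such an envelope might be formed and appended to the training inputs is shown below. This is my own illustration of the concept, not the project’s actual code – the variable layout and parameter values are assumed.

```matlab
% Sketch: converging envelope lines provided as extra Neural Network inputs
t        = 0:0.05:5;
A = 10;  zeta = 1.0;  w = 2*pi;              % placeholder transient parameters
rollErr  = A * exp(-zeta*t) .* sin(w*t);     % transient response to be enveloped
rollRate = gradient(rollErr, t);             % attitude rate from the error profile
psUpper  =  A * exp(-zeta*t);                % upper converging envelope line
psLower  = -A * exp(-zeta*t);                % lower converging envelope line
trainIn  = [rollErr; rollRate; psUpper; psLower];   % one input column per time step
```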

An example of how the PS technique was implemented in the helicopter controllers is shown below. Under normal conditions in this phase of testing, I never needed to adjust the PS parameters – but the idea was that down the road, in situations like gusting winds, the PS capability might be useful.

Diagram of Neural Networks with Integrated Performance-Shaping

I’d previously developed this technique on a simulation of the classic cart / inverted-pendulum problem (used in academia to study coupled, nonlinear control problems) – and it worked very well (amazingly, actually). An illustration of the classic cart / inverted-pendulum system is shown below.

Classic Cart with Inverted Pendulum Illustration

In the simulation, the PS Neural Network could be commanded to quickly upright the pendulum and then walk the cart back to the original position (while keeping the pendulum straight up). Or it could be commanded to do the opposite – get the cart quickly back to the original position while stabilizing the pendulum in the upright position. So performance was emphasized either for the cart (minimize the displacement error quickly) or for the pendulum (upright and stabilize the pendulum quickly).

The plots below are from the cart / inverted-pendulum simulation – the performance curves for varying degrees of PS commands are shown. When the PS values were adjusted to command a highly damped transient response for the pendulum, the Neural Network did just that and walked the cart back to the origin more slowly. When the PS values were adjusted to command a highly damped response for the cart, the Neural Network quickly moved the cart back to the origin while taking its time stabilizing the pendulum.

Neural Network Performance-Shaping Modulates Performance between Cart and Pendulum
Problem Resolution

The reason I was stuck, unable to get the Neural controller to work (per the story that starts at the top), was that at the end of the transient response curves, the rate curve did not end at zero – instead, a small amount of data remained above zero. For some reason I hadn’t noticed this; it was on that Monday that I saw that the rate curve didn’t end at zero but instead ended at a small positive value (it was not very obvious until I looked more closely at the data). What this did was teach the Neural Network that the target attitude rate didn’t need to be zero but could be some small value near zero. And thus the controller was very sloppy and loose (I’d noticed this on the test stand but assumed it was some interaction with the test stand).

So I fixed the problem by manually forcing the rate to end at zero (yes, I altered the data near the end to fit the desired condition). Then I built several new Neural Networks with the slightly updated training set and selected the best one based on test stand performance. The following morning is what you saw in the first video – the correction worked beautifully.

Hands-On Introduction and Tutorial for Setting up and Running NASA's First-Class Java WorldWind Earth Model Simulation

What Is It?

What is WorldWind? Well let me quote NASA’s site directly:

WorldWind is an open source virtual globe API. WorldWind allows developers to quickly and easily create interactive visualizations of 3D globe, map and geographical information. Organizations around the world use WorldWind to monitor weather patterns, visualize cities and terrain, track vehicle movement, analyze geospatial data and educate humanity about the Earth.

There are three “flavors”:

1) Web WorldWind – to build Web applications in your browser.

2) WorldWind Android – to build Android applications.

3) WorldWind Java – to build standalone Java applications for Linux or Windows.

This article covers the application of WorldWind Java – but the classes are pretty much the same across all three application types. The specific site for the Java application side is here – https://worldwind.arc.nasa.gov/java/

Note that for this article I used an older version of WorldWind and a limited set of terrain files. So if you use the latest / greatest version of WorldWind along with a full set of terrain files, the model will look much better than what is seen in these videos.

Why I think it’s Very Cool

From my perspective it’s a high fidelity Earth model that has a tremendous amount of unique and exciting capabilities and features (trust me – this article doesn’t even scratch the surface – no pun intended – of what this system is capable of doing). The developer is only limited by his or her imagination.

Introduction Video

I would suggest that you watch the two short videos below (I recorded all the videos on my home Tower) – as they will give you a quick idea of the utility of using NASA’s WorldWind Earth modeling and simulation tool. Note that the video capture was performed at 10 frames-per-second so these two videos are not as smooth as they would be if I used a graphics video capture card. Click on the lower right square icon (next to the sound / speaker icon) to enlarge the video to almost the size of the monitor in order to more easily view it.

This first video is a general overview.

This second video focuses more on the aspects of watching a simulation unfold and changing observer perspectives by hand (manipulating the Earth model with the mouse).

If you’re intrigued as a software developer, then continue on with the article and I’ll explain this particular code architecture and how to set up the system (it’s really not complicated). Throughout the article are several more videos, each of which is an informal code walk-through for a specific class. And understand that there are many different ways to set up applications with WorldWind – this is just a simple, straight-forward approach for demonstration purposes.

At the end of the article are all of the files needed to run the project without making any code modifications – simply install the required software tools (Java JDK, NetBeans IDE) and the project and support files, and … run the project. There is also a section that explains how to download the pertinent files to just run the code as an executable (jar file) with supporting files (DLLs, library jar files, and terrain files).

High-Level Architecture and Software Layout

System Discussion

At a high level, think of it like a physical game with a game board and pieces. The Earth model is the game board, and the model objects are equivalent to the game pieces that you place on the board. So when you first start out, you have to remove the game board from the box and lay it on the table – in the same way, you launch the Earth model and prepare it for the model objects. Then you select the game pieces on the board – in the same way, you build / select model objects and put them into the Earth model. That’s it. As a side note, no explicit design pattern was used for this project.

Thus there are two main elements to consider: 1) the Earth model and simulation engine, and 2) the user-designed objects that will be used as part of the Earth model and simulation. In this case, the model objects are trajectory objects (that traverse across the globe) and radar objects (stationary on the globe and track the trajectory objects).

The classes in this project are organized as follows:

Driver Class
This class sets the user preferences, builds the trajectory objects and radar objects, loads the preferences and objects into a data object, builds the Earth model (and passes the data object to it via the constructor) and launches the simulation.

Earth Model
This class sets up the Earth model, configures the selected layers, and drives the simulation.

Model Objects
There are two classes of models: 1) the TrajectoryObjects class, and 2) the RadarObjects class. The TrajectoryObjects class contains the properties and propagation algorithms for objects that move at or above the surface of the Earth model. The RadarObjects class contains the properties for stationary objects on the surface of the Earth model that track the moving objects.

Code Diagram

The block diagram is shown below. The driver class EarthProj does the following:

1) builds the trajectory and radar objects and sets their user-specified properties,

2) builds the ScenarioSettings object (this is the data object) and sets the user-specified parameters as well as loads the trajectory and radar object arrays,

3) builds the EarthView object, and

4) launches the demonstration simulation (via an EarthView class method).

Class Description

The following is a set of more detailed descriptions for each class.

Video Walk-Through of the Main

The following video is high-level and is a walk-through of driver class EarthProj and one of the methods of class EarthView. Basically it explains the user settings and the start-to-finish process for setting up and running a simulation.

Code Discussion

Class TrajectoryObjects

The following is an informal code walk-through of the TrajectoryObjects class. Click on the lower right square icon (next to the sound / speaker icon) to enlarge the video to almost the size of the monitor in order to more easily view it.


Class RadarObjects

The following is an informal code walk-through of the RadarObjects class. Click on the lower right square icon (next to the sound / speaker icon) to enlarge the video to almost the size of the monitor in order to more easily view it.


Class ScenarioSettings

The following is an informal code walk-through of the ScenarioSettings class. Click on the lower right square icon (next to the sound / speaker icon) to enlarge the video to almost the size of the monitor in order to more easily view it.


Class EarthView

The following is an informal code walk-through of the EarthView class. Click on the lower right square icon (next to the sound / speaker icon) to enlarge the video to almost the size of the monitor in order to more easily view it.

This next video is a demonstration of the JDesktop – which acts as a desktop upon which other JPanels can be mounted and moved around. This allows the developer to build a Swing setup that has flexibility in that the panels can be moved around during run-time.

Class BuildJPanel

The following is an informal code walk-through of the BuildJPanel class. Click on the lower right square icon (next to the sound / speaker icon) to enlarge the video to almost the size of the monitor in order to more easily view it.

Class CustomOrbitView – Resolving the Clipping Distance Issue

Depending on the observer’s location and the default clip distance settings, at times the view of the far side of a trajectory may be cut off as shown below.

Thus it’s important to give the user the ability to set the near and far clipping distance values. As shown below, the clipping effect can be eliminated.

The following is an informal code walk-through of the CustomOrbitView class. Click on the lower right square icon (next to the sound / speaker icon) to enlarge the video to almost the size of the monitor in order to more easily view it.

Run the Software “Out of the Package”

Instructions for Running the Project Executable (Fastest Setup Time)

If you would like to just run the executable on your desktop – without using the NetBeans IDE – to see the demo simulation, then you’ll need to download the earthdata.zip and simproject.zip files from the links below to your desktop. The assumption is that you’re running a Windows 64-bit operating system.

Once both files are on your desktop, do the following:

1) Unzip the simproject.zip file and then navigate down into the Earth_Proj directory as shown below:

Copy (or move) the Run_Time directory to your desktop and then go into it and it should look like this:

2) Unzip the earthdata.zip file on your desktop (it is over 700 MB in size so it will take 5-10 minutes depending on the speed of your computer) – then go into it and find the WorldWindData directory as shown below.

Move (cut and paste) the WorldWindData directory into the Run_Time directory on your desktop.

3) In the Run_Time directory, double-click on the EarthProj.jar file – as shown below – and the simulation will begin. If you have any problems, feel free to email me at my contact email address at the end of the article.

Instructions for Running Project in NetBeans (Slower Setup Time)

If you would like to run this project software “out of the box” but from the NetBeans IDE (in the event that you want to start making your own code updates), then you’ll need to download the earthdata.zip and simproject.zip files from the links below.

Keep in mind that for this effort, I used NetBeans 8.1, which is a bit old. I have three versions of NetBeans on my Tower – the NetBeans Community’s NetBeans 8.1 and NetBeans 8.2, and the Apache NetBeans Community’s NetBeans 9.0. In this particular case, I meant to use 8.2 but started using 8.1 by accident – it doesn’t matter, as the project will work fine in all three IDEs.

Once both files are on your desktop, do the following:

1) Unzip the simproject.zip file and then navigate down to find the Earth_Proj directory as shown below:

Move the EarthProj_Tower directory to your desktop.

2) Unzip the earthdata.zip file on your desktop (it is over 700 MB in size so it will take 5-10 minutes depending on the speed of your computer) – then go into it and find the WorldWindData directory as shown below.

Move (cut and paste) the WorldWindData directory into the EarthProj_Tower directory on your desktop – the directory should look like this when you’re done:


3) If you don’t have the Java Development Kit 8 (JDK-1.8+) installed on your computer, then download it from the Oracle site and install it – the instructions follow below in the Software Tools Requirements section. If it is already installed then skip to the next step.

4) If you don’t have NetBeans IDE 8.1 installed on your computer, then download it from the NetBeans.org site and install it – the instructions follow below in the Software Tools Requirements section. If it is already installed then skip to the next step.

5) Start NetBeans 8.1 and navigate to open the “EarthProj_Tower” project on your desktop – allow the IDE to scan the project files and then click the green triangle (as shown below) and the simulation will begin.

If you want to use a more recent NetBeans IDE, then Apache NetBeans 9.0 will work fine – it is shown below, ready to run.

Source Code Documentation – Javadoc

The formal documentation for the project is contained in the codejavadoc.zip file below. To access the documentation, simply download this file (click on the link) and unzip it – it will create a “javadoc” directory two levels down. Go into the javadoc directory and open index.html in your preferred browser. If double-clicking opens it in Internet Explorer – which doesn’t render it well – just drag the index.html file into your favorite browser (Brave, Firefox, Chrome, etc.), either into the main window or into the URL address bar.

The following is an example of what you should see.

Software Tools Requirements

Software Tools

This project was put together with: 1) Java Development Kit (JDK) 1.8, and 2) the NetBeans 8.1 IDE, running on a Windows 8.1 Operating System (OS). Keep in mind that this code (the Java source code, plus the WorldWind and JOGL jar files and DLL files) could easily be assembled quickly into an IDE such as IntelliJ IDEA or Eclipse. This could also easily be run in Linux – the main difference is that Windows uses Dynamic Link Libraries (DLLs), whereas the Linux equivalent is Shared Objects (SOs). Thus you’d need to get the JOGL .so files for Linux (I actually have them if you need them – just email me).

Java Development Kit – JDK 8

You can obtain the JDK 8 from Oracle’s site at https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html. Note that if you don’t have an account with Oracle then you’ll have to set one up before being able to download the JDK – it doesn’t cost anything (there’s no license fee) but you need to be registered (or you can’t download the installation package).

Assuming you’re running a Windows 64-bit operating system, you’ll want to download the package that’s highlighted in yellow as shown below.

NetBeans Integrated Development Environment (IDE) 8.1

The download link for NetBeans 8.1 is https://netbeans.org/downloads/8.1. I would suggest that you download the largest and most feature-filled package (circled below on the right). Note that if you don’t have a JDK installed, NetBeans will not continue its installation – so make sure that you install the JDK first.

Wrap Up

NASA WorldWind Code Base and Earth Model Data Files

Here are some useful WorldWind sites.

The direct link to NASA’s WorldWind site is https://worldwind.arc.nasa.gov/
The code base (Github repository) is here – https://github.com/NASAWorldWind/WorldWindJava.
The latest release can be obtained from here: https://github.com/NASAWorldWind/WorldWindJava/releases/tag/v2.1.0

Comments or Questions

If you have comments – then please make them here at the end of the blog article. If you have questions that you want to address to me directly, then feel free to email me at mikescodeprojects@protonmail.com.

Using Integrators in Matlab to Simulate the Motion of an Object

This article focuses on two types of integrators for simulating the motion of an object in Matlab (or Octave). Videos and downloadable source code are at the end of this article. Note that a free alternative to Matlab, called Octave, can also be used to run the software and this is covered here and in one of the videos.

If you’re taking on the task of building a simulation from scratch, the first decision is which integrator you’ll use for propagating the trajectory – whether it’s for a simple case (such as this one) or for a complex aerospace system traversing the atmosphere and space itself. The next step – required by the integrator – is modeling the forces acting upon the object that you’re simulating. Once you’ve accomplished these objectives, you’ve created the core engine that drives your simulation.

To keep it simple, we’ll look at a 1-Degree-of-Freedom (DOF) system – in this case a spring / damper system, that moves along the X-axis, and is acted upon by an external, time-dependent forcing function. The spring produces a force based on the displacement from its natural state (positive or negative, depending on whether it’s compressed or extended). The damper produces a force based on the velocity of the object (the sign, negative or positive, is determined by the direction of the velocity).
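
In equation form – using x for position, xd for velocity, and xdd for acceleration, matching the variable names used later in the code – Newton’s second law for this system is:

m * xdd = F(t) - k*x - c*xd

where m is the mass, k is the spring constant, c is the damper constant, and F(t) is the time-dependent forcing function. Dividing the net force by the mass gives the acceleration that the integrator needs.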

While we’ll be working with Matlab (and demonstrating with Octave as well), one of the integrators can be used in any software development language.

Integrators

ODE45

Matlab has several built-in integrator functions – the most popular is ODE45 (ODE = Ordinary Differential Equation), which completes the entire trajectory simulation in one call to the function. An example of its use (from the actual code base in this project) is shown below. It’s elegant in its simplicity: everything is accomplished in one call to this function – the entire trajectory set (position and velocity, as well as the time profile) is returned in the stateVec array and time vector on the left-hand side.
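
If you’re reading this without the screenshot handy, here’s a minimal sketch of the one-call pattern (the names springDamperDeriv and stateVec0 are illustrative, not necessarily those used in the downloadable code):

tSpan = [0 20];                                 % start and stop times (seconds)
stateVec0 = [0; 0];                             % initial position and velocity
[time, stateVec] = ode45(@springDamperDeriv, tSpan, stateVec0);
plot(time, stateVec(:,1));                      % position profile vs. time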

Runge-Kutta 4th Order

The other integrator is the Runge-Kutta 4th order integrator (also very common and popular) – in this case the trajectory is propagated one time step at a time. As shown in the code example below, the integrator is called for each time step in the simulation, and the solution is propagated forward with each call to the function. This is not a native Matlab or Octave function and thus has to be hand-coded (or the basic algorithm copied from another source and converted into Matlab’s scripting format).
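
For reference, here’s a sketch of what the hand-coded version typically looks like – a single RK4 step plus the propagation loop. The names (rk4Step, deriv, springDamperDeriv) and the step count are mine for illustration; the downloadable code may differ:

% rk4Step.m - one Runge-Kutta 4th order step (in its own function file).
function yNext = rk4Step(deriv, t, y, dt)
  % Advance the state vector y by one fixed time step dt.
  k1 = deriv(t,        y);
  k2 = deriv(t + dt/2, y + (dt/2)*k1);
  k3 = deriv(t + dt/2, y + (dt/2)*k2);
  k4 = deriv(t + dt,   y + dt*k3);
  yNext = y + (dt/6)*(k1 + 2*k2 + 2*k3 + k4);
end

% In the main script, the solution marches forward one step per call:
dt = 0.01;  t = 0;  y = [0; 0];
for i = 1:2000
  y = rk4Step(@springDamperDeriv, t, y, dt);
  t = t + dt;
end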

Pros and Cons for Each Approach

ODE45

The advantage of this approach is that it’s a single call to the function (no iterative calling of the function throughout the trajectory, as is the case with the Runge-Kutta approach), and it has a variable time-step capability: it determines the best time step for each phase of the trajectory. This latter feature frees the user from trying to figure out what time step size should be used to generate accurate simulation results.

The disadvantage is portability – if you decide to convert the code base to, say, Java or C++, the Matlab ODE45 function cannot be carried over. Thus you’ll have to either use a built-in Java or C++ integrator function or use a hand-coded method such as Runge-Kutta.

Runge-Kutta 4th Order

The advantage of this popular and time-tested approach is portability – it can easily be converted to any language. The disadvantage is that it is a fixed time-step method, which means you must have a reasonable idea of what time step value will produce accurate results.

System Modeling

The two key elements are: 1) the integrator, and 2) deriving the force equations acting on the model. The illustration below shows the system diagram and the associated function calls for the spring force (kForce), the damper force (dForce), and the forcing function (fForce). The integrator requires acceleration values, so we simply sum the forces and divide by the mass of the object to obtain the acceleration.

The forces and acceleration are simply modeled as shown in the above diagram. The previous state (position and velocity) is brought in via the stateVec array and loaded to x and xd. The spring force, kForce, is computed using the spring constant, k, and the displacement, x. The damper force is computed using the damper constant, c, and the velocity, xd.

Note that we have to establish reference signs for the forces and accelerations. If the object moves along the positive X-axis, then the spring is compressed and produces a force in the negative X direction. The damper force depends on the velocity (the magnitude drives the force value, and the direction determines the sign).
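
Putting the model together, the derivative function that either integrator calls might look like the sketch below. The constants and the forcing function here are placeholders I’ve picked for illustration – the actual values are in the downloadable code:

% springDamperDeriv.m - state derivatives for the 1-DOF spring / damper system.
function stateDot = springDamperDeriv(t, stateVec)
  m = 1.0;  k = 10.0;  c = 0.5;         % placeholder mass, spring, damper values
  x  = stateVec(1);                     % position
  xd = stateVec(2);                     % velocity
  kForce = k * x;                       % spring force (opposes displacement)
  dForce = c * xd;                      % damper force (opposes velocity)
  fForce = 10 * sin(2*pi*0.5*t);        % placeholder time-dependent forcing function
  xdd = (fForce - kForce - dForce) / m; % Newton's second law
  stateDot = [xd; xdd];                 % [velocity; acceleration]
end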

Results in Matlab

Nearly identical results are obtained with either integrator, as shown below. In this case, the Matlab ODE45 integrator was run with a variable time step, while the Runge-Kutta integrator was run with a very small fixed time step of 0.01 seconds.

Position and Velocity Profiles

Given that the ODE45 data is plotted in blue and the Runge-Kutta data is plotted in red, it’s hard to see any differences in the trajectories – they are essentially identical.

Time Step Profiles

The time step profiles for each integrator are shown below. Note that with the variable time step profile on the left (ODE45), the time step starts out very small because the forces change rapidly in the beginning (due to the relatively high frequency of the forcing function). As the forces change less over time, the time step size is increased.
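
If you’d like to reproduce the variable time-step plot yourself, the step sizes fall straight out of the time vector that ODE45 returns (a sketch, reusing the time variable from the earlier snippet):

stepSizes = diff(time);               % time step used over each interval
plot(time(1:end-1), stepSizes);       % time step profile vs. time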

Of course, the Runge-Kutta time step (profile shown on the right) is fixed, so it never changes regardless of the state of the forces acting on the object.

Results in Octave

As with Matlab, nearly identical results are obtained with either integrator, as shown below. In this case, the Octave ODE45 integrator was run with a variable time step, while the Runge-Kutta integrator was run with a very small fixed time step.

Position and Velocity Profiles

The results are similar to those from the Matlab test runs – it’s hard to see any significant differences in the trajectories – they are basically identical.

Octave ODE45 Function Call

One change had to be made to the ODE45 integrator implementation: the tolerances had to be tightened, because the default values appear to be too loose for this test case (the high frequency of the forcing function requires tighter tolerances in the variable time-step algorithm). Thus an ODE45Wrapper function was written to contain the ODE45 function calls for both Matlab and Octave. The difference is that in Octave the tolerances are specified prior to the ODE45 call, as shown below.
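
A sketch of how such a wrapper might be structured follows. The Octave-detection idiom is standard, but the tolerance values are illustrative – not necessarily those used in the project:

function [time, stateVec] = ODE45Wrapper(deriv, tSpan, state0)
  if exist('OCTAVE_VERSION', 'builtin')            % running under Octave
    opts = odeset('RelTol', 1e-6, 'AbsTol', 1e-8); % tighter than the defaults
    [time, stateVec] = ode45(deriv, tSpan, state0, opts);
  else                                             % running under Matlab
    [time, stateVec] = ode45(deriv, tSpan, state0);
  end
end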


Time Step Profiles

The time step profile is different from Matlab’s, but the trend is the same – it starts small and gets larger over time as the frequency content of the force profile decreases.

Videos

For more detailed information, you can watch any or all of the videos below. And as previously mentioned, the complete code base is available for download after the videos section.

Video 1 – Running the Integrators in Matlab

This video below shows how to run the code when using Matlab. Click on the lower right square icon (next to the sound / speaker icon) to enlarge the video to almost the size of the monitor in order to more easily view it.

Video 2 – Running the Integrators in Octave

This video below shows how to run the code when using Octave. As with the first video, you can click the lower right square icon to enlarge it.

Video 3 – Code Walk-Through

The video below is a code “walk-through” if you’re interested in the details of the various functions. Again, the lower right square icon will enlarge the video.

Source Code

The code base for this article (directories and source code) is available for download as a zip file from the link below. Feel free to use this code for your own purposes with no obligation whatsoever to me. However, if you feel that it’s been beneficial to your efforts, please refer friends and colleagues to this blog when appropriate – thank you!!

Wrap-Up

If you have questions or comments, please leave a comment below – or you can email me directly at mikescodeprojects@protonmail.com.

Windows Visual Studio

This is my first post – nothing exciting – probably more of a rant.

I used Microsoft’s Visual Studio (VS) for about a year to add modules to several C++ projects that involved the 1553 communications protocol. Prior to that I’d mainly coded C++ using NetBeans as the IDE with the Cygwin GNU compiler – both in Windows and in Linux. So it took a little while to get used to VS, but I eventually settled in.

But after using it for a year, here are the kinds of things that made me not want to use it in the future …

1) Migrating from one version to another – say from VS 2013 to VS 2015 – can be a huge pain and a lot of work. It shouldn’t be this hard, and I learned from the forums that many others have gone through this nightmare. As an example, there was a library called ATL (or something like that) used in one of the DLLs – in the newer version the library had changed names, so I had to dig to find the appropriate replacement for the old code calls.

2) Sometimes it “just breaks” and you have to uninstall and then reinstall it. That can be a major undertaking – the uninstall process can take hours and even run overnight. Even then, many stray files and directories are left on your hard drive.

3) I’ve left a project alone for a month – mind you, it built successfully before I walked away – but when I came back and performed a build, the compiler started coughing up errors (libraries / files not found, etc.). This was an untouched installation – I’ve never had this happen with Java or C++ projects in other IDEs. I suspect that a Microsoft system update impacted the VS installation, or else Microsoft directly updated the installation without my knowing and induced the errors.

And there are other stories to tell, but suffice it to say that my experience has been that VS is buggy and fragile at best – and I hope never to work with it again. My preference would be to use NetBeans / Cygwin or Qt Creator to build C++ projects.

Yesterday I decided to remove several VS versions from my home tower – I’d forgotten about the nightmare of uninstalling these things. After the uninstaller had been chugging for an hour with no apparent progress, I decided to see if there was a better way. Thanks to the forums, I downloaded a package called “TotalUninstaller” – a zip file containing several utilities, ASCII files, and executables. I ran one of the executables from the Command (DOS) window and it methodically removed the majority of the VS versions on my tower – but the process took about 2 1/2 hours (my hard drive was constantly at 100%). I also downloaded VS2010_Uninstall-RTM.ENU.exe to get rid of VS 2010 – yes, I know that’s old, but there were old libraries supporting some of the newer code (not my doing). This is what it looked like when it was done …

Anyway – if you can avoid using Visual Studio, you’ll be happier and more excited about your coding projects. The birds will chirp, the angels will sing, and life will be good again.

Why do I code? Insanity.

Seriously though – one has to be really into it in order to handle the frustrations associated with learning a new coding language, debugging somebody else’s code, debugging one’s own code that’s been dormant for 6 months, etc.

In the past, procedural programming was all I knew (consider that I graduated from Auburn University in 1986 with an Aerospace Engineering degree) – then in 2013 I started teaching myself Object Oriented Programming with Java (I really got into building Swing applications) and from there I picked up C++ as well (no, I’m not an expert – the more you know, the more you realize how little you really know).

My favorite language is Java – many years ago, in 1996 to be exact, I picked up a copy of a Java book and had that gut instinct that this language was “it”. However, at the time I had a bee in my bonnet about showing that AI (that’s right – it existed way back then, and before then) – specifically Neural Networks – could fly a helicopter, so that was my focus for the next decade and a half. I’d already applied AI to numerous applications starting in 1990, so I had a pretty good intuitive feel for it.

So this blog is really just to document some of my thoughts, struggles, projects, and possible ideas. It’s more of a personal thing to just “put out there” – I’m not expecting to have a following of any kind. But I’m going to enjoy the journey.