Neural Network Stock Selector

I’ve been developing this code base for about 6 years – even longer in a casual manner. Over the past 6 months I’ve been upgrading the code base from a very old version of Matlab to Matlab R2017b (4 years old but still reasonably recent).

In a nutshell, the system develops Neural Networks that analyze a large number of stock profiles and then predict the movement of the stock prices over the next year – so it forecasts over a one-year time period. The key component is the selection stage, in which the Neural Networks carefully pick out stocks that are highly likely to surge in the upcoming year.

During the code upgrade process, I periodically run full end-to-end tests to make sure the system architecture’s integrity has been preserved. Below are the results from a test case run this morning. The Neural Network system analyzes a list of companies over 10 years and makes predictions for each year. Part of the process is to “team” the Neural Networks – that is, a stock only makes it onto a list if at least a specified number of Neural Networks have selected it. So for a teaming number of 20, the system would show the companies that were selected by at least 20 Neural Networks – a sketch of this filter is shown below.
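As a rough illustration, here is a minimal Matlab sketch of that teaming filter – the variable names and example picks are hypothetical, and the actual code base is far more involved:

    selections = { ...                         % hypothetical picks, one cell per net
        {'LEE','MTZ','XYZ'}, ...               % Net #1
        {'LEE','MTZ'},       ...               % Net #2
        {'LEE','ABC'} };                       % Net #3 (and so on, up to N nets)

    teamNum = 2;                               % would be 20 in the run described above

    allPicks = [selections{:}];                % flatten to one list of tickers
    [tickers, ~, idx] = unique(allPicks);      % unique tickers plus an index map
    votes = accumarray(idx(:), 1);             % how many nets picked each ticker

    teamList = tickers(votes >= teamNum)       % keep only tickers with enough votes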

Below is a set of plots from this test run (the forecast period is 2008 through 2017). A teaming number of 20 was selected, and the selected companies were used in simulated purchases and sales.

The vertical yellow bar was added to highlight the performance results for a teaming number of 20. The performance of the system is shown below in table format.

The Neural Network system abstained from selecting any stocks in 2008, 2014, and 2015.  For the other years the team of 20 selected various stocks.  The only bad year was 2017 – the Neural Networks selected two stocks that were sold at an automatic -10% loss limit. The average Return-On-Investment (ROI) for the Dow Jones Industrial Average (DJIA) was 7.9%. The average ROI for the Neural Network system was 43.5%.

The bottom line is that over 10 years, the Neural Network system outperformed the DJIA by a factor of 5.5.  Below is the result of a short script that starts with an initial investment of $100 for both the DJIA and the Neural Network system and then applies each year’s return on investment.
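For illustration, a minimal sketch of such a script follows – the yearly ROI values here are placeholders, not the actual test results:

    roiDJIA = [-0.30 0.15 0.10 0.05 0.07 0.20 0.08 0.00 0.12 0.25];  % hypothetical
    roiNN   = [ 0.00 0.50 0.40 0.35 0.60 0.55 0.00 0.00 0.98 -0.10]; % hypothetical

    valDJIA = 100;  valNN = 100;                % $100 initial investment each
    for yr = 1:numel(roiDJIA)
        valDJIA = valDJIA * (1 + roiDJIA(yr));  % apply that year's return
        valNN   = valNN   * (1 + roiNN(yr));
        fprintf('Year %2d:  DJIA $%8.2f   NN $%8.2f\n', yr, valDJIA, valNN);
    end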

After 5 years, the Neural Network would have put the investor ahead by a factor of 6.1 over the DJIA and by the end of 10 years, the investor would have made 12.2 times the amount by using the Neural Network system.

Below is an example of the selection output for a teaming number of 20 for the year 2016. The aggregate return on investment was 97.6%.

The stock chart profiles for the selected companies, LEE and MTZ, are shown below. The forecast period was 2016.

The current objective is to get the code upgrade finished and then configure the system to test weekly predictions. That is, the system will select certain stocks that should do well over the next week (purchased on Monday and sold on Friday or before). I’ll be running real-time tests – that is, predictions will be published on my YouTube channel so that an absolute time-stamp is attached to them. Then we’ll see how the selected stocks fare.

What are Super Nets?

Let’s start with the example of students in a medical school. There are 1,000 students and the top 100 students (the top 10%) are getting straight A’s because they are bright and they have studied diligently. Would you say that all of these 100 students (the top 10 percenters) will do equally well out in the real world since they all scored similar grades in medical school? Would you say that all 100 students will become world-renowned brain and heart surgeons that create modern and ground-breaking methods of surgery?

No – you intuitively know that only a handful of the 100 students will “set the world on fire”, while the rest do good work as doctors and surgeons but don’t do anything Earth-shattering.

Why is that? After all, they all scored the same on all of their tests, so we’d assume they’d all set the world on fire, correct? Of course not – because we know that despite their having scored similarly on their tests (A’s), their brains are wired differently, and some of those brains are particularly suited to being creative and innovative in the real world. But we can’t measure that capacity in the university (med school) with tests – we don’t find out until “the rubber hits the road” and each of these graduated students goes out into the real world and begins tackling challenges in their field.

Well, the same applies to Neural Networks … First you train many Neural Networks and select only the top performers (each takes the same series of tests – just as with the students in med school). So out of 1,000 trained Neural Networks, perhaps only 10% (100) of them score above a specified threshold. Do you assume that all of these Neural Networks will perform equally well in “the real world” – the full application domain space? No – you must test them in that domain space, and … just a handful of Neural Networks will be the “renowned brain and heart surgeons” (speaking figuratively, of course) while the remaining Neural Networks out of the original 100 will perform in an average way.

These super high-performing Neural Networks (the “brilliant and innovative brain surgeons”) are called Super Nets. These are the ones that demonstrate blistering performance outside of the original training regime – but you won’t discover them until you fully test the high-achieving 10% in the full application domain space.

In the video below, Neural Networks are being trained (using Matlab’s extremely fast Levenberg-Marquardt optimization algorithm) for stock market prediction purposes – specifically, to predict a company’s future stock performance based on its previous history (one could call it Neural Network Technical Analysis). The Neural Networks that achieve a prediction ROI (Return On Investment) of greater than 50% are saved as part of the high-performer group (similar to the top 10% of the med school student group). Thus when you see the “SUCCESSFUL!!” text, that is a high-achieving Neural Network that has done very well with a test set of companies.

However, the real test is when these high-achieving Neural Networks are tested against a 10-year rolling forecast data set – that is, they must make predictions for each year of a 10-year time span. Those that score the highest ROI with the lowest standard deviation (and there are just a few) are the Super Nets – the Super Star performers.

Autonomous Driving Application

With this application, many Neural Networks would be trained on the sensor inputs (many different images), with the outputs being the appropriate driving commands. The Neural Networks which surpassed a specified threshold of issuing the correct commands would be saved. So for simplicity we’ll say that 1,000 Neural Networks were trained but only 100 scored above the specified threshold.

The next step is to test those high-scoring 100 Neural Networks on the open road in the autonomous vehicle, with each Neural Network tested and scored on performance. From the road tests of these 100 Neural Networks, the top 10 performers are identified. These top 10 Neural Networks are considered to be the “final product” – they are the Super Nets which will be performing the autonomous control of the vehicle.

These Super Nets form a team such that a “consensus solution” is used – all 10 Super Nets constantly process the road images and issue correction commands, but the solution is taken from the consensus. So, for example, if 7 out of 10 Super Nets agree that a “slow down, turn right” command should be issued, then that is the command selected.

For most cases, we can assume that all 10 Super Nets will issue the same command or set of commands. However, for cases where there are ambiguities (e.g., a situation is encountered for which they were not trained – maybe a tilted road with fog at night), the teaming approach will still produce a good solution since it is by consensus. A sketch of this voting rule is shown below.
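As a rough illustration, here is a minimal Matlab sketch of such a consensus vote – the integer command IDs and the threshold are hypothetical:

    cmds = [3 3 3 5 3 3 2 3 3 3];          % hypothetical outputs of the 10 Super Nets
    minVotes = 7;                          % consensus threshold

    [vals, ~, idx] = unique(cmds);         % the candidate commands
    votes = accumarray(idx(:), 1);         % votes per candidate
    [nBest, iBest] = max(votes);

    if nBest >= minVotes
        selected = vals(iBest);            % e.g. 3 = "slow down, turn right"
    else
        selected = NaN;                    % no consensus -- hold / fall back
    end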

A Software Profiler is Your Best Friend

One of the key assets in your suite of software testing tools is the Profiler, and you should get to know it well. The Profiler is “standard equipment” in most software development environments and has a wide array of capabilities to help point out weak areas of the code, reveal the bottlenecks where most of the time is being spent, find memory leaks, etc.

A few days ago, I needed to use the Matlab Profiler on some code I was working on, and I decided the experience would make a good article because of the amount of run time the Profiler helped me save by pointing out a specific function that was, unnecessarily, taking the most time.

I’d been updating some old Matlab code – this particular function processes (parses, manipulates, assembles, etc.) a large number of ASCII text files (over 300 files with 7,000–8,000 rows in each), which can be time-intensive (much slower than just crunching numbers). But in this case, the run time of over 1,200 seconds seemed excessive – as shown below. Keep in mind that this code is running on a relatively new desktop with an SSD and an 8-core Intel i7-9700 processor – so I expected the function to run faster (a gut feeling).


Given that I didn’t know where to start looking, I decided it was best to run the Matlab Profiler to see if any functions were slowing down the process unnecessarily. The quick way to launch the Profiler in Matlab is to simply have the function open and then, in the Editor tab, click on “Run and Time” as shown below.

The run-time with the Profiler, shown below, was longer than the original run because of time spent by the Profiler performing its measurements.


At the end of the run, a Profiler summary was generated, and is shown below. mn is the main function in this example – note that the Profiler shows that the function dateCheck, called by mn, seemed to be the resource hog, consuming 1,183 seconds out of the total of 1,473. So the first step was to click on mn to dig down into the Profiler trace.


After clicking on the mn function (above), the next level down is shown below. The top line in the mn function diagnostic page (below) shows that [ds] = dateCheck(ds) is taking up 80.6% of the run time. Thus the function dateCheck, called by mn, is the culprit, and the next step is to click on dateCheck lower down in the diagnostic page (see the red arrow below) and dig further.


The Profiler summary then takes us to the next level down, into the dateCheck function diagnostic page – the line of code in dateCheck that uses the most resources is at the top of the page (shown below). The child functions are listed below that section, and the main culprit is the Matlab function datenum (see the red arrow below).

So the issue is the Matlab datenum function, which is used in my dateCheck function.


Now we go to my dateCheck function in the Matlab source code file and find the line – currDateNum = datenum(tDate) – as shown below. That is the culprit, apparently causing a big drain on resources.


The next step is to search the forums for a solution – the question is, why does this function take an excessive amount of time? A quick search found the very useful solution shown below. The answer is that datenum works much more efficiently when the date format is specified in its argument list (instead of the function having to infer the format itself).


With the answer in hand, the next step is to implement the solution – that is, specify the date format in the datenum argument list, as shown below.
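As a sketch of the change – assuming dates of the form '28-Jan-2021'; the actual format string depends on the files being parsed:

    tDate = '28-Jan-2021';                 % hypothetical date string from a file

    % Before: datenum must infer the format on every call (slow)
    currDateNum = datenum(tDate);

    % After: the format is supplied, so no inference is needed (fast)
    currDateNum = datenum(tDate, 'dd-mmm-yyyy');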


With the solution implemented, the final step is to re-run the software and see how much time was saved. As shown below, the run time was 571 seconds vs. the original run time of 1,218 seconds!!

Now you understand why the Profiler is your best friend!! And keep in mind that it can not only save you a lot of run time but also help you debug other issues.


Most software development environments and toolchains have profilers built in. The example below is from NetBeans running a Java project – in this case I selected specific methods to be profiled, with the percentage of run time displayed for each. Other profilers, such as Valgrind (used on Linux for C/C++ applications), are commonly used to detect memory leaks.

Good Data Means Fast Neural Network Training Times

The Pyrenn Levenberg-Marquardt training algorithm for Feed-Forward Neural Networks is extremely fast – 0.140 seconds to train a Neural controller which must simultaneously balance an inverted pendulum, mounted on a cart, while moving the cart back to the origin – watch the short video below.

In the vast majority of the Neural Network applications that I’ve developed, the training time was a tiny fraction of my time spent on the project. The major time hit on these kinds of efforts is not training time – it is the time spent developing quality, robust training and test data sets (thinking the problem through and carefully analyzing the data – often an iterative process). If this is done correctly, the resultant Neural Networks are built extremely quickly and yield high performance.

Neural Network Performance Shaping Preview

This is just a quick preview of what will be coming in my first Patrons-only post on my Patreon account (sometime in the next 2 weeks) – https://www.patreon.com/realAI. A Neural Network was trained on a single pass of the behavior of a cart / inverted-pendulum system being controlled by a conventional controller. The Performance Shaping technique was then implemented, which allows the user to command the Neural controller either to quickly minimize the Pendulum angle error (and maintain the minimum error) or to quickly minimize the Cart position error. This is a powerful technique that lets you use a single data set while building in the ability to modulate the performance of the Neural controller in favor of the Pendulum or in favor of the Cart.

The video first shows the Neural controller being commanded to quickly minimize the Cart position error while keeping the inverted pendulum upright. Then the Neural controller is commanded to quickly minimize the Pendulum angle error – it does this and slowly walks the Cart back to the zero reference point (thus zeroing out the Cart position error). The horizontal red arrows are the Neural controller’s commanded forces acting on the Cart. A set of plots is shown at the end of the video.

Keep It Simple

Neural Networks don’t always require complex frameworks and other mathematical algorithms to support them – it’s always best to start simple and only increase the complexity when absolutely needed.

A case in point is this Neural Network control system that was designed to control one specific RC helicopter airframe and yet … was able to fly several different types of RC helicopters with different airframes and different powerplants (gas, electric, and jet). In addition, the Neural Network control system could easily handle sling-loads and gusting / turbulent winds – two nonlinear disturbances that were never part of the training and test sets.

The flight software, which contained the Neural Network functions:
1) was coded in C,
2) used procedural, not object-oriented, programming,
3) was single-threaded, and
4) ran on the DOS 6.22 operating system.

It was uncomplicated yet highly effective. The flight software executed the following functions:
1) Sensor and actuator checks were performed during the start-up mode and the flight software would refuse to execute the take-off maneuver if anything was off.
2) RS-232 messages were received and processed from the onboard RC data link via another IO processor board – these were the pilot’s basic commands such as “take-off”, “hover”, “ascend”, “forward-flight”, etc.
3) RS-232 messages were received and processed from an onboard 900 MHz data link. These were also the pilot commands plus various commands for autonomous flight. In addition, the flight software also performed a telemetry function by sending out flight and system data to the 900 MHz data link so that the operators on the ground could visually monitor the geographic location of the helicopter and the health statuses on the ground control station.
4) All sensor messages – direct RS-232 from the sensors and RS-232 messages from an IO processor board – were processed, and the servo actuator positions were monitored.
5) It performed all of the flight control functions such as hover, transition to forward flight and forward flight, velocity-set, take-off, landing and also managed the execution of an autonomous flight plan (setting up the flight modes on its own). Thus it continuously commanded all of the servo actuators.
6) If the datalink was lost for a period of time, the flight software would execute the “Return Home” mode and fly back autonomously to its original takeoff point (including landing).
7) It recorded all pertinent flight and system data and continuously wrote it out to a binary file which could be reviewed later as a diagnostic tool if there were any observed anomalies.

And despite the simplicity, the Neural Network flight control performance was extremely powerful. The Neural Networks easily handled different airframes, different powerplants, gusting winds, etc.

The video (approximately 9 minutes) shows all of the different airframes performing various maneuvers – the same Neural Network control system stabilized and guided each of them.  There are four slides in the beginning and the rest of the video shows flight maneuvers.

This is not to say that you shouldn’t use modern tools and processes – but don’t overcomplicate the process. In the beginning it’s really important to keep things simple and only use what is needed to execute the objectives.

If you’d like to learn about building Neural Network applications, consider becoming a Patron on my Patreon site. I will be posting articles on a monthly basis with specific applications that will include source code, documentation, and video discussions.

New Patreon Site for Learning to Apply AI

My new Patreon site is now up and running – https://www.patreon.com/realAI.

It can be very intimidating to see all of the requirements listed for Data Scientist and Machine Learning engineer positions (multiple languages, frameworks, etc.). Thus, the intent of the Patreon effort is for me to help you lose your fear of attempting to use Neural Networks for real-world applications and to get you up to speed on basic methods and techniques. These tutorials will teach you the important core fundamentals that you need in order to: 1) understand and code up the application, and 2) form a good understanding of the solution so you can tailor and build a high-performance Neural Network.

The coding language for each project will either be Matlab / Octave script or Java. Eventually Python may be added to the mix. No purchase of tools will be necessary – Octave and Java Integrated Development Environments (IDEs) and the Software Development Kits (SDKs) can be downloaded at no cost from the internet.

The first lesson will be published for subscribers sometime in mid-February. I’m excited and passionate about this new path and will do my best to provide a superior and satisfying product for my subscribers. I want you to learn and become cutting-edge AI engineers.

There are two subscription tiers, as discussed below.

Tier 1 ($5 per month):
Access to application description, downloadable source code, and basic instructions for setting up and building the Neural Network solution.

Tier 2 ($10 per month):
The same as Tier 1 with the addition of videos:
– of application and solution code walk-throughs,
– with detailed explanations of the Neural Network training and test data setup processing, and
– on how to learn from the training sessions and improve performance, etc.

Pyrenn Levenberg-Marquardt (LM) Neural Network Training Algorithm as an Alternative to Matlab’s LM Training Algorithm

January 30, 2021 Update: If you are interested in learning the fundamentals of building Neural Network solutions then please take a look at my Patreon site. The first project will be released in approximately 2 weeks (Tier 1: source code and basic instruction – Tier 2: same as Tier 1 but with the addition of video code-walk-throughs, instruction, etc.).

First – this isn’t an article bashing Matlab – on the contrary, I’ve used and depended on Matlab as one of my many engineering tools my entire career. However, Matlab is neither free nor cheap – the commercial cost is around $2,000 for Matlab plus $1,000 for the Deep Learning (formerly Neural Network) toolbox. So when there are alternatives for specific tasks, it’s always worth taking a closer look. The Pyrenn LM Feed-Forward (also Recurrent) Neural Network training algorithm can run in Matlab or Octave – or you can run the Python version. And it’s free. Thus if you’re developing Neural Network applications but can’t afford the cost of Matlab, you can use the Pyrenn LM source code in Octave. Even in Matlab, you’ll achieve better overall performance using the Pyrenn LM training algorithm than using the Matlab LM training algorithm.

Most of my Neural Network application efforts in the past have used Feed-Forward Neural Networks, and I’ve always used the fastest training method (since graduating from back-propagation in the early days): the Levenberg-Marquardt optimization algorithm. In fact, only 1% of my time on any Neural Network application is spent on training – because the LM method is so damn fast. Most of my time is spent where it needs to be – on the understanding and design of the training and test sets. I learned long ago that the architecture is of 2nd or 3rd order importance when compared to the quality of the training and test data sets – those are of 1st order importance.

The LM optimization algorithm has been used reliably, for decades, across many industries to rapidly solve optimization problems – it’s known for its speed. The only potential downside is the large memory required for large problems (the Jacobian matrices grow very large). Fortunately, most of the Neural Network applications that I’ve worked on don’t require huge data sets. And typically, if you do have a large data set – such as with image processing – the intermediate step is to perform some type of Principal Component Analysis (PCA) so that the primary features of the large data set can be extracted and represented with a smaller data set, which is then more tractable for a Neural Network.
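As a rough illustration of that intermediate step, here is a minimal sketch using Matlab’s pca function (from the Statistics and Machine Learning Toolbox) – the data and the 95% variance cutoff are hypothetical:

    X = randn(500, 1000);                  % hypothetical: 500 samples, 1000 raw features

    [~, score, ~, ~, explained] = pca(X);  % principal component decomposition

    nKeep  = find(cumsum(explained) >= 95, 1);  % components covering 95% of the variance
    Xsmall = score(:, 1:nKeep);            % reduced data set for NN training

    fprintf('Reduced %d features to %d components\n', size(X, 2), nKeep);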

This article discusses the results of testing both the Pyrenn LM and Matlab LM training algorithms on a simple quadratic curve. The summary results are shown below, followed by a technical appendix which discusses the details of all of the testing. At the very end of this article are three videos: 1) using the code in Matlab, 2) using the code in Octave, and 3) an informal code “walk-through”. Following the videos is a link to a downloadable zip file which contains all of my source code (and the Pyrenn source code) used for the analysis in this article so that you can run it yourself – either in Matlab or in Octave.

Before going any further, you can obtain the Pyrenn library with both Python and Matlab code libraries here – https://pyrenn.readthedocs.io/en/latest/. A big “Shout Out” to Dennis Atabay and his fellow Germans for not only building this awesome algorithm – but doing it in two languages, Matlab and Python. Then again, most Germans are bilingual (at a minimum) so I suppose it’s to be expected. The code is very well commented – but you may need Google Translate to render the German comments into English.

System Modeled for Bench Testing the Matlab and Pyrenn Neural Networks

A simple test case that can be used to bench test any Feed-Forward Neural Network is the standard quadratic equation, as shown below. It’s not complex, but it is nonlinear, and it shouldn’t be hard to train a Neural Network to “learn” the curve’s nonlinear properties and extrapolate reasonably, to some degree, outside the training regime.

Simple Quadratic Curve

The actual quadratic curve used for this article is shown below. The blue stars represent the Neural Network training points – the corresponding X and Y coordinates for each point are the input and output training data sets respectively. The red stars represent the test points – note that the test set lies both inside the training area as well as outside of it. This is actually used as Test Case #1 – the farthest “outside” test point reaches approximately 33% beyond the training regime.

Matlab Generated Quadratic Curve for Training and Testing
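For reference, a short sketch that generates this kind of training / test data is shown below – the training span of ±12 is inferred from the stated test reaches (±16 ≈ 33% beyond, ±25 ≈ 108% beyond, ±50 ≈ 316% beyond), and the point spacing is illustrative:

    xTrain = -12:2:12;   yTrain = xTrain.^2;   % training points (blue stars)
    xTest  = -16:4:16;   yTest  = xTest.^2;    % test points, inside and outside (red stars)

    plot(xTrain, yTrain, 'b*', xTest, yTest, 'r*'); grid on;
    legend('Training points', 'Test points');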
Testing Methodology and Procedure

Three test cases were set up for bench testing both the Matlab LM and Pyrenn LM trained Neural Networks. These test cases reached outside the training regime by 33% (Test Case #1), 108% (Test Case #2), and 316% (Test Case #3). The point was to push the Neural Networks hard in testing (how well do they perform outside the training regime?).

In each of the test scenarios, the Matlab LM algorithm was used to train 10 Neural Networks – the best one, with the lowest test error, was selected to compete against the Pyrenn LM algorithm. In a similar manner, the Pyrenn LM algorithm was used to train 10 Neural Networks, and again, the best one was selected as the competitor.
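A minimal sketch of that “train 10, keep the best” loop, using the Pyrenn Matlab functions (CreateNN, train_LM, NNOut, per the Pyrenn docs) and reusing the xTrain / xTest data from the earlier sketch – the iteration limit and error goal here are assumptions, not my actual settings:

    bestErr = inf;
    for k = 1:10
        net  = CreateNN([1 4 1]);                        % 1 input, 4 middle Neurons, 1 output
        net  = train_LM(xTrain, yTrain, net, 200, 1e-5); % max iterations, error goal
        yHat = NNOut(xTest, net);                        % Neural Network test outputs
        err  = sum((yHat - yTest).^2);                   % accumulated test error
        if err < bestErr
            bestErr = err;                               % keep the top performer
            bestNet = net;
        end
    end
    fprintf('Best accumulated test error: %g\n', bestErr);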

For Test Case #1 and Test Case #2, this process was also performed for three different architectures: 1) one middle layer with 4 Neurons, 2) two middle layers with 4 Neurons each, and 3) two middle layers with 8 Neurons each. For Test Case #3, only the first and last architectures were used for testing – the reason being that I was running out of time for getting this article finished and posted (my own self-imposed deadline).

Performance Summary

In the plots below, the three architectures tested are represented along the X-axis: (1) one middle layer with 4 Neurons, (2) two middle layers with 4 Neurons each, and (3) two middle layers with 8 Neurons each. The Y-axis is the average error over all 10 Neural Networks tested for each architecture.

Test Case #1 represents a data set that reaches approximately 33% beyond the training regime boundary. Test Case #2 represents a data set that reaches approximately 108% beyond the training regime boundary. And Test Case #3 is “really out there” with a reach of 316% beyond the training regime boundary. Of course, the further the test points reach beyond the training regime, the lower the expected performance.

In all cases, the Pyrenn LM algorithm (blue line) far outperformed the Matlab LM algorithm (red line) – the lower the error, the better the performance.

Note that increasing the size of the Neural Network architecture – adding more middle layers and more Neurons per layer – does not lead to increased performance. Smaller is better for this application.

The results generated by the Pyrenn LM Neural Network training algorithm are impressive and, based on my experience in the past, are likely indicative of the level of performance to be expected with more complex systems.

More test details can be obtained by reviewing the Technical Appendix below.

Technical Appendix

The testing process was driven by: 1) increasing the number of outside test points (referred to as Test Case #1, Test Case #2, and Test Case #3), and 2) varying the Neural Network architecture for each of the test cases.

Test Results for Data Set #1

1 Hidden Layer – 4 Neurons

In this first test case, a simple Neural Network architecture is used – one “middle” layer with four Neurons – as shown below.

Neural Network Architecture – 1 Middle Layer with 4 Neurons

The results of training Neural Networks with both the Pyrenn and Matlab LM training algorithms are shown below. The red circles on the curve are the target test points – the hope is that the Neural Network will correctly output those points (the output Y coordinate given the input test X coordinate); the Neural Network outputs are represented by red stars. Even if they are not exact, depending on the overall trend, the result can still be considered good performance.

The blue stars are the Neural Network outputs (the Y coordinate for the given X coordinate input) at the training points. The expectation is that, if the training is good, at a minimum the Neural Network will correctly reproduce the Y coordinates of the training points. If it can’t do that, there’s no point in looking at the test point performance.

Each of the two plots represents the best-performing Neural Network out of a total of 10 – that is, the best one out of the 10 generated by the Pyrenn LM algorithm, and the best one out of the 10 generated by the Matlab LM algorithm.

Comparison of Performance between Pyrenn and Matlab

As shown below, the accumulated test errors were far less for the Pyrenn LM-trained Neural Networks than those trained by the Matlab LM algorithm. Note that the errors were sorted from lowest to highest.

Matlab and Pyrenn Test Error Curves

The Command Window output from each session is shown below. Note that the difference in errors between the two LM algorithms is between two and four orders of magnitude.

2 Hidden Layers – 4 Neurons Each

In this case, another “middle” layer was added with four more Neurons.

Neural Network Architecture – 2 Middle Layers with 4 Neurons

While the performance of the Pyrenn LM-trained Neural Networks was maintained, the change in architecture resulted in worse performance for the Matlab LM-trained Neural Networks. Each of the two plots represents the best-performing Neural Network out of a total of 10 – the best one out of the 10 generated by the Pyrenn LM algorithm, and the best one out of the 10 generated by the Matlab LM algorithm.

Comparison of Performance between Pyrenn and Matlab

Once again the accumulated errors for the Pyrenn LM-trained Neural Networks were far less than those of the Matlab LM-trained Neural Networks. Note that the errors were sorted from lowest to highest.

Matlab and Pyrenn Test Error Curves

The Command Window output for each session is shown below. Note that the difference in errors between the two LM algorithms is between two and three orders of magnitude.

2 Hidden Layers – 8 Neurons Each

Again the architecture was modified to have eight Neurons in each of two “middle” layers, as shown below.

Neural Network Architecture – 2 Middle Layers with 8 Neurons

The performance of the Matlab LM-trained Neural Networks continued to deteriorate while the Pyrenn LM-trained Neural Networks maintained good performance. Each of the two plots represents the best-performing Neural Network out of a total of 10 – the best one out of the 10 generated by the Pyrenn LM algorithm, and the best one out of the 10 generated by the Matlab LM algorithm.

Comparison of Performance between Pyrenn and Matlab

As before, there was a significant difference between the performances of the Neural Networks trained by the Pyrenn LM algorithm and those trained by the Matlab LM algorithm. Note that the errors were sorted from lowest to highest.

Matlab and Pyrenn Test Error Curves

The Command Window output for each of the training / test sessions is shown below. Note that the difference in errors between the two LM algorithms is between two and three orders of magnitude.

Test Results for Data Set #2

For this second test case, the reach of the test data points outside the training regime was increased. Whereas for the first test case the minimum and maximum test points were (-16, 256) and (+16, 256), the new test range minimum and maximum points were (-25, 625) and (+25, 625).

1 Hidden Layer – 4 Neurons

For this first architecture, a simple Neural Network is used – one “middle” layer with four Neurons. The results of training Neural Networks with both the Pyrenn and Matlab LM training algorithms are shown below. Each of the two plots represents the best-performing Neural Network out of a total of 10 – the best one out of the 10 generated by the Pyrenn LM algorithm, and the best one out of the 10 generated by the Matlab LM algorithm.

Comparison of Performance between Pyrenn and Matlab

While the performance of one particular Matlab LM-trained Neural Network was good, the accumulated test errors were far less for the Pyrenn LM-trained Neural Networks than for those trained by the Matlab LM algorithm (because the majority of the Matlab LM-trained Neural Networks did poorly). Note that the errors were sorted from lowest to highest. One way to interpret the plot is that the Pyrenn LM algorithm generated far more high-performing Neural Networks than the Matlab LM algorithm.

Matlab and Pyrenn Test Error Curves

The Command Window output for each of the training / test sessions is shown below. Note the large percentage of Pyrenn LM generated Neural Networks with low test errors.

2 Hidden Layers – 4 Neurons Each

In this case, another “middle” layer was added with four more Neurons. The performance of the Matlab LM-trained Neural Networks deteriorated tremendously while the Pyrenn LM-trained Neural Networks maintained good performance. Each of the two plots represents the best-performing Neural Network out of a total of 10 – the best one out of the 10 generated by the Pyrenn LM algorithm, and the best one out of the 10 generated by the Matlab LM algorithm.

Comparison of Performance between Pyrenn and Matlab

As shown below, the accumulated test errors were far less for the Pyrenn LM-trained Neural Networks than those trained by the Matlab LM algorithm. Note that the errors were sorted from lowest to highest.

Matlab and Pyrenn Test Error Curves

The Command Window output for each of the training / test sessions is shown below. Note that the difference in the test errors is an order of magnitude.

2 Hidden Layers – 8 Neurons Each

In this case, the architecture was modified to have eight Neurons in each of two “middle” layers. The Pyrenn LM Neural Network performance degraded a little, while the Matlab LM Neural Network performance was just slightly worse than the already “very bad” performance with the previous architecture. Each of the two plots represents the best-performing Neural Network out of a total of 10 – the best one out of the 10 generated by the Pyrenn LM algorithm, and the best one out of the 10 generated by the Matlab LM algorithm.

Comparison of Performance between Pyrenn and Matlab

As shown below, the accumulated test errors were far less for the Pyrenn LM-trained Neural Networks than those trained by the Matlab LM algorithm. Note that the errors were sorted from lowest to highest error.

Matlab and Pyrenn Test Error Curves

The Command Window output for each of the training / test sessions is shown below. Note that the difference in the test errors is an order of magnitude.

Test Results for Data Set #3

For this third test case, the reach of the test data points outside the training regime was increased again. Whereas for the second test case the minimum and maximum test points were (-25, 625) and (+25, 625), the new test range minimum and maximum points were (-50, 2,500) and (+50, 2,500).

1 Hidden Layer – 4 Neurons

For this first architecture, a simple Neural Network is used – one “middle” layer with four Neurons. The results of training Neural Networks with both the Pyrenn and Matlab LM training algorithms are shown below. Each of the two plots represents the best-performing Neural Network out of a total of 10 – the best one out of the 10 generated by the Pyrenn LM algorithm, and the best one out of the 10 generated by the Matlab LM algorithm.

Comparison of Performance between Pyrenn and Matlab

As shown below, the accumulated test errors were far less for the Pyrenn LM-trained Neural Networks than those trained by the Matlab LM algorithm. Note that the errors were sorted from lowest to highest.

Matlab and Pyrenn Test Error Curves

The Command Window output for each of the training / test sessions is shown below. Note that the difference in the test errors is approximately an order of magnitude.

2 Hidden Layers – 8 Neurons Each

In this case, the architecture was modified to have eight Neurons in each of two “middle” layers. The Pyrenn LM Neural Network performance degraded significantly, but the Matlab LM Neural Network performance totally fell apart. Each of the two plots represents the best-performing Neural Network out of a total of 10 – the best one out of the 10 generated by the Pyrenn LM algorithm, and the best one out of the 10 generated by the Matlab LM algorithm.

Comparison of Performance between Pyrenn and Matlab

As shown below, the accumulated test errors were less for the Pyrenn LM-trained Neural Networks than those trained by the Matlab LM algorithm. Note that the errors were sorted from lowest to highest.

Matlab and Pyrenn Test Error Curves

The Command Window output for each of the training / test sessions is shown below.

Software Discussion

The three videos below cover: 1) running the code in Matlab, 2) running the code in Octave, and 3) a “code walk-through”.

Video #1 – Running the Code in Matlab

The video below shows how to run the software in Matlab. Click on the lower right square icon (next to the sound / speaker icon) to enlarge the video to almost the size of the monitor in order to more easily view it.

Video #2 – Running the Code in Octave

Note that it takes longer to run the Pyrenn LM algorithm in Octave – but the results are similar to those obtained in Matlab. In the example shown below, the run time was approximately 182 seconds (3 minutes, 2 seconds) vs a similar run in Matlab that would take 26 seconds.

However, if you’re using Octave because you don’t have access to Matlab, then the additional training time is a small price to pay.

The plot below, which corresponds to the above test run, shows the results of running the Pyrenn LM training algorithm and using Test Case #1 with the simple, single middle layer with 4 Neuron architecture.

The video below shows how to run the software in Octave.

Video #3 – Code Walk-Through

The video below is an informal “code walk-through” of the Matlab functions.

Software Download

The software (Matlab and Pyrenn source code and directories), as a zip file, can be downloaded from the link below.

A Lesson in Perseverance: Development of a Prototype AI Neural Network Helicopter Control System

It was in September of 2001 that I was on leave-without-pay from my job at Northrop Grumman, my wife was on doctor-ordered bed rest because our 3rd son was trying to “arrive early” (and we had two other toddler sons, ages 4 and 2), and I was working 12+ hours a day in my garage on a special personal project. The project was focused on demonstrating that a Neural Network attitude control system could successfully stabilize a radio-control helicopter along the roll and pitch axes. I’d created many successful Neural Network applications in simulation in the past, but this was the first time that I was attempting to implement a Neural Network solution for a complex hardware-driven, real-world system.

The basic objective was that, given a set of commanded values for roll and pitch attitude, the helicopter Neural control system would solidly maintain the helicopter at those attitudes – thus it should be able to hover in a stable manner (if the commanded roll and pitch angles were appropriate for hover). The intent was that for a flight test, I’d preset the commanded roll and pitch values in the flight computer – they would be a “guesstimate” at first (a slight negative roll angle knowing the effect of the tail rotor force). The expectation was that while the helicopter was in the air, I could update the commanded roll and pitch values via the keyboard until I found a stable position – the Neural roll and pitch controllers would keep the helicopter at those commanded attitude values. The laptop flight software would save those values for the next flight.

I’d already spent a lot of time on wiring up the system, designing the Neural controllers, “flight testing” on the test stand, etc. but the final success of stabilized flight in my backyard seemed to elude me. My Northrop boss had been great in giving me the time off (leave-without-pay – as I’d already burned up all my vacation time – but held my job for me) – but he kept asking “when are you coming back?!?”.

So it was on a Friday that I called him – “I need two more weeks – after that, no matter what, I promise that I’ll be back”. That was, of course, two more weeks without pay. But Northrop management had bent over backwards to accommodate my insane endeavor.

While I told him that it would just be two weeks, in my mind I assumed that at this rate, it would be at least 3, maybe 4 more months until the goal was achieved – and of course I’d be working like a mad scientist in the evenings and during the weekends (and I couldn’t neglect my family – they needed my time as well). So it was all quite a bit depressing.

Nevertheless I worked all day Saturday (14 hours), took Sunday off (sometimes you have to step away and get new perspective), and worked another long day on Monday. On Tuesday morning … the Neural control system was successfully stabilizing the helicopter in flight in my back yard – as shown in the video below. So instead of being months away, I was only 3 days away from the successful test flight. It was on Monday that I’d found what I thought was the problem – and proved the solution Tuesday morning in my backyard.

The Lesson – We Never Know How Close We Are to Success

The lesson here was that on that Friday, I believed that I was still months away from getting the system to successfully hover in free-flight (not on a test stand) – yet in reality I was only 3 days short of the objective. It was a reminder to me that we can never give up – because we never know how close we are to reaching our objective. Many people quit too soon – never knowing that they were just hours or days away from achieving their success.

The solution that locked in the successful first flight is discussed at the end of this article.

Free-Flight Testing

After performing a few more flight tests in my backyard with extended training landing gear – and being reasonably confident that the Neural control system was very stable, I asked an RC pilot friend to help me test the system in an area out in the country. The objective was to test the helicopter Neural control system at 10-30 feet altitude with no safety gear – and simply let it hover for extended periods of time (10 to 20 minutes). These tests would solidify my confidence level regarding the performance and stability of the Neural control system.

The experienced RC pilot would be on hand to take control of the helicopter, via RC transmitter, if an anomaly occurred. A safety switch at the end of the tether on the ground (wired to the helicopter via the tether) gave the RC pilot either full control over the helicopter for emergencies or only partial control during testing of the Neural control system. In the partial-control mode the pilot controlled just the throttle / collective and tail rotor, so as to be able to raise and lower the helicopter in altitude while the Neural control system stabilized the helicopter about the roll and pitch axes.

The following videos are of the subsequent flight tests that we performed out in the country – the purpose was to continue to test the stability and performance of the Neural control system. Each video starts and ends with the Neural control system actively stabilizing the roll and pitch attitude.

Flight Test #1
Flight Test #2
Flight Test #3

Technical Background

The rest of this article goes into detail on the effort that was involved to make this happen – Neural controller design, avionics integration, problem resolution, etc.

Neural Network Control System

The original idea had been to demonstrate that a closed-loop Neural Network attitude control system (i.e., a real-time flight control system) could easily stabilize a helicopter about the roll and pitch axes. It’s not an easy problem – try to hover an RC helicopter if you’ve never done it. Typically a “newbie” will tend to overcompensate on the joysticks and cannot maintain stability about the roll and pitch axes to save his or her life. The best way to learn is on a computer simulator before trying the real thing – then the many crashes one will experience during the learning process aren’t a problem (better than destroying an actual RC helicopter).

Thus I had to break it down into a classical control problem – plant, feedback error, compensation signal, etc. And that’s before, of course, bringing hardware into the equation.

Control System Diagram

In the classic control diagram below, the system being controlled is the roll axis attitude error and roll rate of the helicopter (and this example applies to the pitch axis mode as well). The objective of the control system is to be able to quickly, and in a stable manner, zero out the roll attitude error and roll attitude rate.

The “plant” is the helicopter itself – specifically the helicopter dynamics. Starting on the left in the diagram below, a fixed commanded roll attitude value (which could be positive or negative) is compared with the actual roll attitude value to produce a “roll error” (the difference is the error). This “roll error”, along with the roll attitude rate, is input and propagated through a controller, which attempts to zero out both the roll error and the roll rate. In this case, the controller is a Neural Network, which outputs a correction signal (compensation) – specifically a delta value for the servo actuator rate command. This is added to the current servo command value and is then sent to the actuator to update the servo position.

This action feeds into the helicopter dynamics as the main rotor dynamics are affected by the updated roll servo actuator motion (the main rotor disk rolls to the right or left). The resulting change in motion of the helicopter is measured by the attitude sensor. The sensor feeds this information back to the flight software – and the cycle repeats every 50 milliseconds (20 Hz).

Classic Control Block Diagram
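In rough Matlab-style pseudocode (the real implementation was in C, and readAHRS, neuralController, and sendServo are hypothetical stand-ins for the actual IO and Neural Network routines), the loop looks like this:

    rollCmd  = -2.0;                  % preset commanded roll attitude (deg)
    servoCmd = 0.0;                   % current roll servo command (trim)

    for cycle = 1:1200                % e.g. one minute of flight at 20 Hz
        [rollMeas, rollRate] = readAHRS();               % measured attitude and rate
        rollErr  = rollCmd - rollMeas;                   % feedback error
        deltaCmd = neuralController(rollErr, rollRate);  % NN correction (a delta value)
        servoCmd = servoCmd + deltaCmd;                  % add delta to current command
        sendServo(servoCmd);                             % update the servo position
        pause(0.05);                                     % 50 ms cycle (20 Hz)
    end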
Neural Network / Hardware Diagram

A general schematic of the entire system, which shows details of the Neural controllers and the hardware, is shown below. A Crossbow (the company was acquired by Moog, Inc. in 2011) Attitude and Heading Reference System (AHRS) was mounted on the front of the helicopter – it provided stabilized roll and pitch data to the control system. Note that I used an RC tail rotor stability device to keep the tail steady – the initial focus of the project was on the helicopter roll and pitch axes (one problem at a time). The measured roll and pitch attitude outputs were provided, at a rate of 20 Hz, to two Neural Network modules – one performed roll control and the other performed pitch control. Each Neural Network then output the required servo motor step value given the attitude error and rate values.

It’s important to note that both Neural controllers were identical – that is, the Neural Network developed from roll test data was also used to control the pitch axis. Thus the Neural roll controller, which learned from the roll dynamics only, also easily handled the very different pitch axis dynamics.

Basic Avionics / Electronics Configuration

In the above diagram, the Crossbow AHRS is colored gold – this was a later model. The image was taken from their website (many years ago) for illustration purposes. The actual AHRS used in this effort was colored black as you’ll see in the hardware images further down in the article.

Flight Test Schematic

The field flight test schematic is shown below. The flight software – including the Neural Network controllers – was coded in C (using a Borland C compiler), running on DOS 6.22 on an HP laptop. The 90-foot tether, which connected the laptop with the helicopter, contained two RS-232 cable sets – one received the data from the AHRS (on board the helicopter), and the other sent the updated servo commands to the helicopter.

In the laptop, an input text file contained the preset commanded roll and pitch attitude values for the Neural control system which would attempt to maintain the helicopter at these preset commanded attitudes. While the helicopter was in the air, if it started to drift to the left for example, I would incrementally increase the commanded roll, via arrow keys on the keyboard, until the helicopter stopped and maintained a reasonable hover (so that it didn’t drift all over the place). The flight computer would save the updated values for future flights. So typically after one flight, I didn’t have to update the commanded roll and pitch values as the natural hover points had already been established.

Helicopter System Flight Control Schematic

Development Laboratory – My Home Garage and Office

All of the development was performed in my home – specifically in my home office, my garage, and my backyard. All of the costs came out of my personal funds as well – the RC helicopter, the avionics equipment, laptops, tooling, etc. The most expensive single item was the Crossbow AHRS at just over $4,000 (remember that this was in 2001). Try convincing your spouse of the value of purchasing a small black box, for $4,000, that appears to do nothing and is not useful inside the house!! Nowadays these kinds of systems can be purchased for just a few hundred dollars.

Flight Test Stand

A flight test stand, upon which the helicopter was mounted, was used in the development of the Neural controllers and for preliminary testing. The image below shows the test stand with some explanations of the parts.

Helicopter Test Stand

The image below shows the helicopter during a particular test (airframe test only) on the test stand. The test stand served several purposes, including vibration testing of the AHRS and generating roll profile data for creating the Neural Networks that would be used as the roll and pitch controllers.

Helicopter Undergoing Testing on Aluminum Test Stand
Flight Computer – My HP Laptop

There were two laptops dedicated to the effort – one made by HP and the other made by Compaq (this was just before the time that the two companies merged). The HP laptop was used as the flight computer for controlling the helicopter (laptop on the ground communicating with the helicopter in the air via a 90 foot tether) while the Compaq laptop was used for bench testing avionics and other hardware components.

In the image below, the Compaq laptop is shown performing a test of the Crossbow AHRS unit.

Electronics / Avionics Test Bench Laptop
Avionics Integration – Phase-1

The image below shows the initial avionics set-up – yes, it’s very primitive, but when you’re doing something like this on your own – with your own funds, on your own time, and just need a prototype – it’s sufficient. And while I tried to be careful, I did make some mistakes – one time I mis-wired the power and ground leads and burned up a Pontech servo controller board – that was a bad day.

The image below shows the basic layout.

Electronics Layout

The image below shows a different perspective with the Crossbow AHRS unit attached. However, after doing a lot of testing with the helicopter on the test stand, I realized that the AHRS was going to need to be isolated from the airframe vibration. Needless to say, it can be unnerving having a helicopter’s main rotor disk spinning at around 1,700 rpm in your garage in close quarters (yes a 5-6 foot diameter main rotor disk spinning that fast can take your head off in a split second).

Electronics / Avionics Layout
Avionics Integration – Phase-2

In this phase, because of balancing issues, I decided to make the avionics integration more compact (move the weight closer to the main rotor shaft) – the revised layout is shown below. In addition, the AHRS was mounted inside a metal box with foam pads to provide the vibration isolation discussed previously. Yes, it looks like a mess, but everything was tied down pretty solidly – I just needed it to work for field testing.

Updated Avionics Configuration
Helicopter Flight Test Configuration

Since I wasn’t an experienced RC helicopter pilot, for backyard flight testing I used “trainer landing gear”, so that if I had to take control of the helicopter near the ground, I wouldn’t overcompensate on the joysticks and flip the helicopter over on its side (a catastrophic situation since the main rotor blades would hit the ground while turning at 1,700 rpm) – likely the extended landing gear skids would “catch” the helicopter and give me a chance to recover and upright the aircraft.

Helicopter with Trainer Landing Gear

Neural Controller Design

What are Neural Networks?

The following is a definition from Wikipedia which I think is reasonable:

Artificial neural networks or connectionist systems are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems “learn” to perform tasks by considering examples, generally without being programmed with task-specific rules.

For this project, Feed-Forward Neural Networks were used – an example is shown below with two inputs, one middle (or sometimes called “hidden”) layer, and one output. The lines connecting the different nodes are called weights or gains. For example, the input to P1 (P = Processing element) is the sum of input X multiplied by w1 plus input Y multiplied by w2. Mathematically speaking, each processing element is a hyperbolic tangent function whose minimum and maximum values asymptotically approach -1 and +1 respectively.

The “learning” or “relationship mapping” is contained in the architecture of the processing elements and the interconnected weights / gains. The ability to learn nonlinear systems is derived from the nonlinear nature of the hyperbolic tangent processing elements.

Feed-Forward Neural Network
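A minimal numeric sketch of the forward pass for a network like the one shown above (two inputs, one middle layer of hyperbolic tangent processing elements, one output) – the weights here are random placeholders:

    x  = [0.5; -0.3];                    % the two inputs, X and Y
    W1 = randn(3, 2);  b1 = randn(3, 1); % weights / gains into the middle layer (3 PEs)
    W2 = randn(1, 3);  b2 = randn(1, 1); % weights / gains into the output PE

    h = tanh(W1*x + b1);                 % each PE: tanh of the weighted sum of its inputs
    y = tanh(W2*h + b2);                 % network output (also a tanh PE)
    disp(y)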
Why Use Neural Networks for Control?

Well – what is the purpose of Artificial Intelligence?

The purpose is to teach a “system” not only how to perform a task or series of tasks but also how to create effective new solutions for circumstances for which it was not trained.

The automotive robotic systems shown below are very complex – however, they can only perform very specific tasks for which they are programmed.

Programmed Robotics
Programmed Robotics

The idea with Artificial Intelligence is just that – the system has some kind of intelligence that enables it to make decisions for situations beyond its training regime.

Intelligent Robotics

I’d already had a lot of experience applying Feed-Forward Neural Networks to a variety of simulation and image-recognition applications with amazing success – thus this application was just the next step. Neural Networks could be used to handle more complex helicopter control problems such as handling a sling-load, maintaining stability in very strong gusting / turbulent winds, etc.

Training the Neural Network from Transient Response Example

The basic concept was to use a transient response (rapidly decaying sinusoidal wave) as the “behavior to learn or emulate” to build an example training data set for the Neural Network. In other words, teach the Neural Network that it should quickly dampen out attitude error and drive attitude rate to zero in the process. Examples of various types of transient responses are shown below.

Transient Response Examples
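For illustration, a short Matlab sketch that generates this kind of decaying-sinusoid transient is shown below – the damping and frequency values are illustrative, not the values used for the helicopter:

    t    = 0:0.05:4;                      % 20 Hz samples over 4 seconds
    zeta = 0.6;  wn = 2*pi*1.0;           % damping ratio, natural frequency (rad/s)
    wd   = wn*sqrt(1 - zeta^2);           % damped frequency
    roll = exp(-zeta*wn*t) .* cos(wd*t);  % rapidly decaying sinusoid (attitude error)
    rate = gradient(roll, t);             % attitude rate

    plot(t, roll, t, rate); grid on;
    legend('attitude error', 'attitude rate');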

The helicopter transient response data (relationship between the servo actuator command profile and the response of the main rotor disk which is measured by the AHRS unit) would be generated in the following manner:

1) Mount the helicopter (with avionics gear) on the test stand.
2) Bring the helicopter up to full power (just enough throttle / collective for takeoff).
3) Run a transient response curve through the roll servo (from the laptop computer) in order to get a decaying sinusoidal motion of the helicopter as shown in the image below. The AHRS unit would measure the roll profile which would be recorded by the laptop computer.

Sinusoidal Roll of Helicopter on Test Stand for Generating Training Data

Once the data is recorded, an area of the data is sectioned off for training and the data is curve-fit. The illustration below explains the objectives in setting up the training data.

Training Data Snapshot

The raw transient response roll profile, generated while the helicopter was on the test stand, is shown below.

Raw Helicopter Transient Response Data

The next step was to select the training data window, as shown below.

Windowed Training Data Region

The final step, before scaling for training, was to move the “target” roll value to meet the actual settled roll value and thus have a zero error condition at the end of the training set. As shown below, the error (in gray) was measured from a horizontal line just slightly above the zero attitude angle line.

Illustration of Roll Error and Roll Rate Training Data Curves
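A sketch of that final step follows – the settled end value becomes the target (so the terminal error is zero), and the terminal rate sample is pinned to zero as well. The data values and variable names here are hypothetical, not the actual flight data.

public class TrainingDataSketch {
    public static void main(String[] args) {
        // Hypothetical windowed roll samples (deg) ending at a settled value.
        double[] roll = { 8.0, -4.5, 2.3, -1.0, 0.6, 0.45, 0.4, 0.4 };
        double dt = 0.05; // sample period, s

        // Use the settled end value as the target so the error is zero at
        // the end of the training set.
        double target = roll[roll.length - 1];

        double[] rollError = new double[roll.length];
        double[] rollRate  = new double[roll.length];
        for (int i = 0; i < roll.length; i++) {
            rollError[i] = target - roll[i];
            rollRate[i]  = (i == 0) ? 0.0 : (roll[i] - roll[i - 1]) / dt;
        }

        // Pin the terminal rate to zero as well - otherwise the network
        // learns that a nonzero end rate is acceptable (see the Problem
        // Resolution section below).
        rollRate[rollRate.length - 1] = 0.0;

        System.out.println("Last error: " + rollError[rollError.length - 1]
                + "  Last rate: " + rollRate[rollRate.length - 1]);
    }
}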
Training Algorithm

For this effort I used the 1998 Matlab Neural Network toolbox (running on a 32-bit Windows operating system) – specifically the Levenberg-Marquardt (LM) training algorithm for Feed-Forward Neural Networks. As background, the Levenberg-Marquardt optimization algorithm is used industry-wide to solve all kinds of nonlinear least-squares problems. For small and medium-sized networks it typically converges in far fewer iterations than gradient-descent backpropagation – to put it mildly, it blows the doors off the other Feed-Forward training algorithms.

These days I still use Matlab’s LM training algorithm – it’s now part of the “Deep Learning” toolbox. In addition, I’ve started using the Pyrenn Levenberg-Marquardt training algorithm as well. Here is the link to their site with downloadable Python and Matlab code – https://pyrenn.readthedocs.io/en/latest/. In fact, my next blog article will discuss a recent effort to compare the performance of the two LM training algorithms.

Performance Shaping Technique

A special technique that allows the user to adjust performance in real time (and that not many people know about) is called “Performance Shaping”. It gives the user the ability to “dial in” varying degrees of performance depending on desired or changing performance requirements. This feature adds a measure of additional adaptability for changing conditions (especially those for which the Neural Network was not trained).

The Performance-Shaping (hereafter referred to as PS) capability is integrated by adding two converging lines that form an envelope around the transient response. This teaches the Neural Network that the PS values drive the required performance. Thus when the Neural Network is in active operation (post-training, of course), it can be commanded to increase (tighten) or decrease (loosen) performance by changing the PS input values.

Fundamental Concept of Performance-Shaping
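Here is a rough sketch of the envelope idea – two bounds that converge toward zero, whose instantaneous values are fed to the Neural Network as extra PS inputs. The constants and names below are hypothetical, not the actual controller values.

public class PerformanceShapingSketch {
    // One side of the converging envelope: a line that tightens toward zero.
    // The mirror image (the negative of this) forms the other side.
    static double envelope(double t, double startWidth, double convergeTime) {
        return Math.max(0.0, startWidth * (1.0 - t / convergeTime));
    }

    public static void main(String[] args) {
        double startWidth = 10.0;   // initial envelope half-width, deg
        double convergeTime = 2.0;  // shrink this to demand tighter performance

        for (double t = 0.0; t <= 2.5; t += 0.25) {
            double ps = envelope(t, startWidth, convergeTime);
            // The network input vector would be augmented with +ps and -ps.
            System.out.printf("t=%.2f  PS bounds: +/-%.3f%n", t, ps);
        }
    }
}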

An example of how the PS technique was implemented in the helicopter controllers is shown below. Under normal conditions in this phase of testing, I never needed to adjust the PS parameters – but the idea was that down the road, in situations like gusting winds, the PS capability might be useful.

Diagram of Neural Networks with Integrated Performance-Shaping

I’d previously developed this technique on the classic cart / inverted-pendulum simulation (used in academia to study coupled, nonlinear control problems) – and it worked very well (amazingly well, actually). An illustration of the classic cart / inverted-pendulum system is shown below.

Classic Cart with Inverted Pendulum Illustration

In the simulation, the PS Neural Network could be commanded to quickly upright the pendulum and then walk the cart back to the original position (while keeping the pendulum straight up). Or it could be commanded to do the opposite – get the cart quickly back to the original position while stabilizing the pendulum in the upright position. So performance was emphasized either for the cart (minimize the displacement error quickly) or for the pendulum (upright and stabilize the pendulum quickly).
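For readers who want to experiment, below is a minimal sketch of the cart / inverted-pendulum dynamics (the standard pole-balancing formulation from the control literature) with a simple placeholder feedback law where the PS Neural Network would sit. All constants are illustrative.

public class CartPendulumSketch {
    public static void main(String[] args) {
        // Physical constants (illustrative): gravity, cart mass, pole mass,
        // pole half-length.
        double g = 9.81, M = 1.0, m = 0.1, l = 0.5;
        // State: cart position / velocity, pole angle (rad) / angular rate.
        double x = 0.0, xDot = 0.0, theta = 0.1, thetaDot = 0.0;
        double dt = 0.02;

        for (int step = 0; step < 500; step++) {
            // Placeholder feedback law - the PS Neural Network would go here.
            // This one only stabilizes the pendulum; it lets the cart drift.
            double F = 30.0 * theta + 8.0 * thetaDot;

            double cos = Math.cos(theta), sin = Math.sin(theta);
            double temp = (F + m * l * thetaDot * thetaDot * sin) / (M + m);
            double thetaAcc = (g * sin - cos * temp)
                    / (l * (4.0 / 3.0 - m * cos * cos / (M + m)));
            double xAcc = temp - m * l * thetaAcc * cos / (M + m);

            // Euler integration of the state.
            x += dt * xDot;          xDot += dt * xAcc;
            theta += dt * thetaDot;  thetaDot += dt * thetaAcc;
        }
        System.out.printf("Final state: x=%.3f m, theta=%.4f rad%n", x, theta);
    }
}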

The plots below are from the cart / inverted-pendulum simulation – the performance curves for varying degrees of PS commands are shown. When the PS values were adjusted to command a highly damped transient response for the pendulum, the Neural Network did just that and walked the cart back to the origin more slowly. When the PS values were adjusted to command a highly damped response for the cart, the Neural Network quickly moved the cart back to the origin while taking its time stabilizing the pendulum.

Neural Network Performance-Shaping Modulates Performance between Cart and Pendulum
Problem Resolution

The reason I was stuck on not getting the Neural controller to work (per the story that starts at the top) was that at the end of the transient response curves, the rate curve did not end at zero – instead, a small amount of data remained above zero. It wasn’t until that Monday that I noticed the rate curve ended with a small positive value rather than zero (it was not very obvious until I looked closely at the data). This taught the Neural Network that the target attitude rate didn’t need to be zero but could be some small value above zero. As a result the controller was very sloppy and loose (I’d noticed this on the test stand but assumed it was some interaction with the stand).

So I fixed the problem by manually making the rate end at zero (yes, I altered the data near the end to match the desired condition). Then I built several new Neural Networks with the slightly updated training set and selected the best one based on test stand performance. The following morning is what you saw in the first video – the correction worked beautifully.

Hands-On Introduction and Tutorial for Setting Up and Running NASA’s First-Class WorldWind Java Earth Model Simulation

What Is It?

What is WorldWind? Well let me quote NASA’s site directly:

WorldWind is an open source virtual globe API. WorldWind allows developers to quickly and easily create interactive visualizations of 3D globe, map and geographical information. Organizations around the world use WorldWind to monitor weather patterns, visualize cities and terrain, track vehicle movement, analyze geospatial data and educate humanity about the Earth.

There are three “flavors”:

1) Web WorldWind – to build Web applications in your browser.

2) WorldWind Android – to build Android applications.

3) WorldWind Java – to build standalone Java applications for Linux or Windows.

This article covers the application of WorldWind Java – but the classes are pretty much the same across all three application types. The specific site for the Java application side is here – https://worldwind.arc.nasa.gov/java/

Note that for this article I used an older version of WorldWind and a limited set of terrain files. So if you use the latest / greatest version of WorldWind along with a full set of terrain files, the model will look much better than what is seen in these videos.

Why I Think It’s Very Cool

From my perspective it’s a high-fidelity Earth model that has a tremendous number of unique and exciting capabilities and features (trust me – this article doesn’t even scratch the surface – no pun intended – of what this system is capable of doing). The developer is only limited by his or her imagination.

Introduction Video

I would suggest that you watch the two short videos below (I recorded all the videos on my home Tower) – as they will give you a quick idea of the utility of using NASA’s WorldWind Earth modeling and simulation tool. Note that the video capture was performed at 10 frames-per-second so these two videos are not as smooth as they would be if I used a graphics video capture card. Click on the lower right square icon (next to the sound / speaker icon) to enlarge the video to almost the size of the monitor in order to more easily view it.

This first video is a general overview.

This second video focuses more on the aspects of watching a simulation unfold and changing observer perspectives by hand (manipulating the Earth model with the mouse).

If you’re intrigued as a software developer, then continue on with the article and I’ll explain this particular code architecture and how to set up the system (it’s really not complicated). Throughout the article are several more videos, each of which is an informal code walk-through for a specific class. And understand that there are many different ways to set up applications with WorldWind – this is just a simple, straightforward approach for demonstration purposes.

At the end of the article are all of the files needed to run the project without making any code modifications – simply install the required software tools (Java JDK, NetBeans IDE) and the project and support files, and … run the project. There is also a section that explains how to download the pertinent files to just run the code as an executable (jar file) with supporting files (DLLs, library jar files, and terrain files).

High-Level Architecture and Software Layout

System Discussion

At a high level, think of it like a physical game with a game board and pieces. The Earth model is the game board, and the model objects are equivalent to the game pieces that you place on the board. So when you first start out, you have to remove the game board from the box and lay it on the table – in the same way, you launch the Earth model and prepare it for the model objects. Then you select the game pieces and place them on the board – in the same way, you build / select model objects and put them into the Earth model. That’s it. As a side note, no explicit design pattern was used for this project.

Thus there are two main elements to consider: 1) the Earth model and simulation engine, and 2) the user-designed objects that will be used as part of the Earth model and simulation. In this case, the model objects are trajectory objects (that traverse across the globe) and radar objects (stationary on the globe and track the trajectory objects).
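To show how little it takes to lay out the “game board” by itself, here is a minimal sketch that opens a window containing the WorldWind globe and nothing else. It assumes the WorldWind jar and its native libraries are on the classpath.

import gov.nasa.worldwind.BasicModel;
import gov.nasa.worldwind.awt.WorldWindowGLCanvas;
import javax.swing.JFrame;

public class HelloGlobe {
    public static void main(String[] args) {
        // The "game board": a canvas with the default Earth model and layers.
        WorldWindowGLCanvas wwd = new WorldWindowGLCanvas();
        wwd.setModel(new BasicModel());

        JFrame frame = new JFrame("WorldWind Globe");
        frame.add(wwd);
        frame.setSize(800, 600);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}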

The classes in this project are organized as follows:

Driver Class
This class sets the user preferences, builds the trajectory objects and radar objects, loads the preferences and objects into a data object, builds the Earth model (and passes the data object to it via the constructor) and launches the simulation.

Earth Model
This class sets up the Earth model, configures the selected layers, and drives the simulation.

Model Objects
There are two classes of models: 1) the TrajectoryObjects class, and 2) the RadarObjects class. The TrajectoryObjects class contains the properties and propagation algorithms for objects that move at or above the surface of the Earth model. The RadarObjects class contains the properties for stationary objects on the surface of the Earth model that track the moving objects.
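To give a flavor of the propagation side, here is a hedged sketch that steps a point along a great-circle route using WorldWind’s LatLon and Position utilities (those classes and methods are the real WorldWind API). The endpoints and altitude are made up; the real TrajectoryObjects class holds these as per-object properties.

import gov.nasa.worldwind.geom.LatLon;
import gov.nasa.worldwind.geom.Position;

public class TrajectorySketch {
    public static void main(String[] args) {
        // Hypothetical endpoints and altitude.
        LatLon start = LatLon.fromDegrees(34.05, -118.25); // Los Angeles
        LatLon end   = LatLon.fromDegrees(40.71,  -74.01); // New York
        double altitudeMeters = 10000.0;

        // Step the moving object along the great circle in 5% increments.
        for (int i = 0; i <= 20; i++) {
            double amount = i / 20.0;
            LatLon latLon = LatLon.interpolateGreatCircle(amount, start, end);
            Position position = new Position(latLon, altitudeMeters);
            System.out.println(position);
        }
    }
}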

Code Diagram

The block diagram is shown below. The driver class EarthProj does the following (a compilable sketch of this flow follows the list):

1) builds the trajectory and radar objects and sets their user-specified properties,

2) builds the ScenarioSettings object (this is the data object) and sets the user-specified parameters as well as loads the trajectory and radar object arrays,

3) builds the EarthView object, and

4) launches the demonstration simulation (via an EarthView class method).
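Here is a compilable sketch of that four-step flow. The project’s class names are used, but the constructors and method names shown are hypothetical stand-ins (the real signatures are covered in the walk-through videos), so the classes are stubbed out to let the sketch compile on its own.

public class EarthProjSketch {

    // Hypothetical stubs standing in for the real project classes.
    static class TrajectoryObjects { }
    static class RadarObjects { }
    static class ScenarioSettings {
        TrajectoryObjects[] trajectoryObjects;
        RadarObjects[] radarObjects;
    }
    static class EarthView {
        EarthView(ScenarioSettings settings) { /* set up the Earth model */ }
        void launchSimulation() { /* drive the simulation */ }
    }

    public static void main(String[] args) {
        // 1) Build the trajectory and radar objects and set their properties.
        TrajectoryObjects[] trajectories = { new TrajectoryObjects() };
        RadarObjects[] radars = { new RadarObjects() };

        // 2) Build the data object and load the settings and object arrays.
        ScenarioSettings settings = new ScenarioSettings();
        settings.trajectoryObjects = trajectories;
        settings.radarObjects = radars;

        // 3) Build the EarthView object, passing the data object via the constructor.
        EarthView view = new EarthView(settings);

        // 4) Launch the demonstration simulation via an EarthView method.
        view.launchSimulation();
    }
}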

Class Descriptions

The following is a set of more detailed descriptions for each class.

Video Walk-Through of the Main Driver Class

The following video is a high-level walk-through of the driver class EarthProj and one of the methods of the EarthView class. Basically it explains the user settings and the start-to-finish process for setting up and running a simulation.

Code Discussion

Class TrajectoryObjects

The following is an informal code walk-through of the TrajectoryObjects class.


Class RadarObjects

The following is an informal code walk-through of the RadarObjects class.


Class ScenarioSettings

The following is an informal code walk-through of the ScenarioSettings class.


Class EarthView

The following is an informal code walk-through of the EarthView class.

This next video is a demonstration of the JDesktopPane – which acts as a desktop upon which panels can be mounted and moved around. This gives the developer a flexible Swing setup, since the panels can be repositioned at run-time.
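For anyone unfamiliar with this part of Swing, here is a minimal sketch of the idea – a JDesktopPane holding a movable, resizable JInternalFrame (onto which a JPanel with plots or controls would be mounted). The titles and sizes are illustrative.

import javax.swing.JDesktopPane;
import javax.swing.JFrame;
import javax.swing.JInternalFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class DesktopSketch {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JDesktopPane desktop = new JDesktopPane();

            // A movable, resizable internal frame; a JPanel with plots or
            // controls would be mounted inside a frame like this one.
            JInternalFrame inner =
                    new JInternalFrame("Telemetry", true, true, true, true);
            inner.add(new JLabel("panel content goes here"));
            inner.setSize(250, 150);
            inner.setVisible(true);
            desktop.add(inner);

            JFrame frame = new JFrame("Desktop Demo");
            frame.setContentPane(desktop);
            frame.setSize(800, 600);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}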

Class BuildJPanel

The following is an informal code walk-through of the BuildJPanel class.

Class CustomOrbitView – Resolving the Clipping Distance Issue

Depending on the observer’s location and the default clip distance settings, at times the view of the far side of a trajectory may be cut off as shown below.

Thus it’s important to give the user the ability to set the near and far clipping distance values. As shown below, the clipping effect can be eliminated.
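The essence of the fix is a pair of setter calls on the view. This sketch assumes the window’s view is a BasicOrbitView (as in this project’s CustomOrbitView), and the distance values are illustrative only.

import gov.nasa.worldwind.WorldWindow;
import gov.nasa.worldwind.view.orbit.BasicOrbitView;

public class ClipDistanceSketch {
    // Widen the view frustum so the far side of a trajectory is not clipped.
    static void widenClipPlanes(WorldWindow wwd) {
        BasicOrbitView view = (BasicOrbitView) wwd.getView();
        view.setNearClipDistance(10.0);        // meters
        view.setFarClipDistance(20_000_000.0); // meters, beyond the far limb
        wwd.redraw();
    }
}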

The following is an informal code walk-through of the CustomOrbitView class.

Run the Software “Out of the Package”

Instructions for Running the Project Executable (Fastest Setup Time)

If you would like to just run the executable on your desktop – without using the NetBeans IDE – to see the demo simulation, then you’ll need to download the earthdata.zip and simproject.zip files from the links below to your desktop. The assumption is that you’re running a Windows 64-bit operating system.

Once both files are on your desktop, do the following:

1) Unzip the simproject.zip file and then navigate down into the Earth_Proj directory as shown below:

Copy (or move) the Run_Time directory to your desktop, then go into it – it should look like this:

2) Unzip the earthdata.zip file on your desktop (it is over 700 MB in size, so unzipping will take 5-10 minutes depending on the speed of your computer) – then go into it and find the WorldWindData directory as shown below.

Move (cut and paste) the WorldWindData directory into the Run_Time directory on your desktop.

3) In the Run_Time directory, double-click on the EarthProj.jar file – as shown below – and the simulation will begin. If you have any problems, feel free to email me at my contact email address at the end of the article.

Instructions for Running Project in NetBeans (Slower Setup Time)

If you would like to run this project software “out of the box” but from the NetBeans IDE (in the event that you will want to start making your own code updates), then you’ll need to download the earthdata.zip and simproject.zip files from the links below.

Keep in mind that for this effort I used NetBeans 8.1, which is a bit old. I have three versions of NetBeans on my Tower – the NetBeans Community’s NetBeans 8.1 and NetBeans 8.2, and the Apache NetBeans Community’s NetBeans 9.0. In this particular case I meant to use 8.2 but started using 8.1 by accident – it doesn’t matter, as the project will work fine in all three IDEs.

Once both files are on your desktop, do the following:

1) Unzip the simproject.zip file and then navigate down to find the Earth_Proj directory as shown below:

Move the EarthProj_Tower directory to your desktop.

2) Unzip the earthdata.zip file on your desktop (it is over 700 MB in size, so unzipping will take 5-10 minutes depending on the speed of your computer) – then go into it and find the WorldWindData directory as shown below.

Move (cut and paste) the WorldWindData directory into the EarthProj_Tower directory on your desktop – the directory should look like this when you’re done:


3) If you don’t have the Java Development Kit 8 (JDK-1.8+) installed on your computer, then download it from the Oracle site and install it – the instructions follow below in the Software Tools Requirements section. If it is already installed then skip to the next step.

4) If you don’t have NetBeans IDE 8.1 installed on your computer, then download it from the NetBeans.org site and install it – the instructions follow below in the Software Tools Requirements section. If it is already installed then skip to the next step.

5) Start NetBeans 8.1 and navigate to open the “EarthProj_Tower” project on your desktop – allow the IDE to scan the project files and then click the green triangle (as shown below) and the simulation will begin.

If you want to use a more recent NetBeans IDE, then Apache NetBeans 9.0 will work fine – it is shown below, ready to run.

Source Code Documentation – Javadoc

The formal documentation for the project is contained in the codejavadoc.zip file below. To access the documentation, simply download this file (click on the link), and then unzip it – it will create a “javadoc” directory two levels down. Go into the javadoc directory and click on index.html or drag the index.html file into your preferred browser. If you click on the index.html file, it may open in Internet Explorer and it doesn’t work well in that browser – so if that’s the case then just drag the index.html file to your favorite browser (Brave, Firefox, Chrome, etc.) – either into the main window or into the URL address bar.

The following is an example of what you should see.

Software Tools Requirements

Software Tools

This project was put together with: 1) Java Development Kit (JDK) 1.8, and 2) the NetBeans 8.1 IDE, running on a Windows 8.1 Operating System (OS). Keep in mind that this code (the Java source code, plus the WorldWind and JOGL jar files and DLL files) could easily be assembled into an IDE such as IntelliJ IDEA or Eclipse. It could also easily be run on Linux – the main difference is that Windows uses Dynamic Link Libraries (DLLs), whereas the Linux equivalent is Shared Objects (SOs). Thus you’d need the JOGL .so files for Linux (I actually have them if you need them – just email me).

Java Development Kit – JDK 8

You can obtain JDK 8 from Oracle’s site at https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html. Note that if you don’t have an account with Oracle, you’ll have to set one up before you can download the JDK – there’s no license fee, but you must be registered to download the installation package.

Assuming you’re running a Windows 64-bit operating system, you’ll want to download the package that’s highlighted in yellow as shown below.

NetBeans Integrated Development Environment (IDE) 8.1

The download link for NetBeans 8.1 is https://netbeans.org/downloads/8.1. I would suggest that you download the largest and most feature-filled package (circled below on the right). Note that if you don’t have a JDK installed, NetBeans will not continue its installation – so make sure that you install the JDK first.

Wrap Up

NASA WorldWind Code Base and Earth Model Data Files

Here are some useful WorldWind sites.

The direct link to NASA’s WorldWind site is https://worldwind.arc.nasa.gov/
The code base (Github repository) is here – https://github.com/NASAWorldWind/WorldWindJava.
The latest release can be obtained from here: https://github.com/NASAWorldWind/WorldWindJava/releases/tag/v2.1.0

Comments or Questions

If you have comments – then please make them here at the end of the blog article. If you have questions that you want to address to me directly, then feel free to email me at mikescodeprojects@protonmail.com.