
Green Data Part 2: IoT Hardware


This blog post is part of a 5-part series that explores the use of IoT and Data Science to maximize solar energy by leveraging the MapR Data Platform. Read my first blog post here for an introduction to this topic.

Being reasonably certain that, once pumped with enough caffeine, I could determine the optimal angle, I decided the next step was to implement an IoT project using a Raspberry Pi that would let me both determine the angle of the array and control the 24-volt actuator motor that sets that angle. After all, to move the array to a specific angle, I first needed to know where the array was.

This led me to a somewhat simple IoT Raspberry Pi project. Simple, yes, but working at MapR, I have learned so much about the world of IoT and edge computing that I was excited to try my hand at a project that combined several of those pieces.

The result was the ability to add the components I needed into the original housing, letting me run both the original tracking hardware and the Raspberry Pi for data gathering at the same time. While both could run simultaneously, only one component could "control" the motor, so I let the optical sensor control the array while the Raspberry Pi gathered data about its orientation.

Throughout this blog series, I've taken the advice of Dr. Ted Dunning. Other times, I've ignored it. For the good of the readers, I will include that advice as well, as it is almost always well founded. In the hardware category, he pointed out that my range extenders did not have electrical isolation, making the system sensitive to electrical fluctuations and lightning. Given the location, and the few lightning strikes observed in the area, I am running the system without said isolation; however, Ted's advice is something everyone should be aware of, so I include it here.

Here is the main control box (open):

  • The 24-volt power supply (left component)
  • The original tracking hardware (top component on the right)
  • The I2C extender module (middle component on the right)
  • The Raspberry Pi with I2C and motor control hat (bottom right)

Here is the sensor on the array plane:

  • The I2C extender is on the left.
  • The 6-axis gyro/accelerometer is on the right.
  • The wire is a 4-wire, 22 AWG, outdoor cable.
  • The enclosure is weather-tight.

Here is a shot from the back of the north array, so you can see the control box (closed in this shot) as well as the sensor location on the panels.

Prior to working with the data for the solar project itself, I needed to get my head around how the sensor even provided data to me. How did the numbers from a Python program correspond to physical measurements in the real world? The script in the Solar Pi repo is how I did this. Essentially, every 5 seconds, I told it to log the following information to a JSON file:

  {
    "ts": "2018-11-15 09:07:38",
    "epochts": 1542294458,
    "array": "NORTHPI",
    "accel_x": 1338,
    "accel_y": 668,
    "accel_z": -1819,
    "gyro_x": -28,
    "gyro_y": -14,
    "gyro_z": -3,
    "alt": 17.71685293467566,
    "az": 141.73480959945215
  }

  • ts and epochts are the timestamps of the reading, as a local-time string and as epoch time, respectively.
  • array is which array is logging the data (I use the hostname of the Raspberry Pi here).
  • accel_x, accel_y, and accel_z are the x-, y-, and z-axis readings from the accelerometer.
  • gyro_x, gyro_y, and gyro_z are the x-, y-, and z-axis readings from the gyroscope.
  • alt and az are the sun altitude and sun azimuth, returned from the Pysolar module at the time of the reading.
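To make the record format concrete, here is a sketch of how one of these log entries might be assembled. The helper name `build_record` and the example invocation are my own illustration, not the actual Solar Pi script; in the real script, the accel/gyro values would come from the I2C sensor and alt/az from Pysolar's `get_altitude`/`get_azimuth` at the time of the reading.

```python
import json
from datetime import datetime, timezone

def build_record(array_name, accel, gyro, alt, az, now=None):
    """Assemble one JSON log record in the format shown above.

    accel and gyro are (x, y, z) tuples of raw sensor readings;
    alt and az are the sun's altitude and azimuth in degrees.
    """
    now = now or datetime.now(timezone.utc)
    return {
        "ts": now.strftime("%Y-%m-%d %H:%M:%S"),
        "epochts": int(now.timestamp()),
        "array": array_name,
        "accel_x": accel[0], "accel_y": accel[1], "accel_z": accel[2],
        "gyro_x": gyro[0], "gyro_y": gyro[1], "gyro_z": gyro[2],
        "alt": alt,
        "az": az,
    }

# Example: the sample reading above, serialized as one JSON line.
record = build_record("NORTHPI", (1338, 668, -1819), (-28, -14, -3),
                      17.71685293467566, 141.73480959945215)
print(json.dumps(record))
```

Appending one such JSON object per line every 5 seconds yields a file that is trivial to replay later for analysis.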

A single record as shown above doesn't really help me understand how the numbers translate into the real world; hence, step one was to attach the sensors and start collecting data.

Gyroscope Readings

I first looked at the gyro readings. (A note here: I purchased a sensor with both a gyroscope and an accelerometer because I wasn't sure which was best for my application. Not the most scientific approach, but I like working with data, and it would help me understand how these two sensors map to the physical world.)

The gyroscope data was disappointing. Rather than providing a position, it provided evidence that something was moving (as far as I can tell). The spikes in gyro_y (orange) correlate to when the optical tracker was moving the array.

I then focused on the accelerometer info.

This proved to be the data I was looking for. While I am not an expert on sensors, looking at the data, and knowing where the array was physically pointed, allowed me to understand the readings.

  • accel_y (orange) never changes, no matter where the array is, so I can ignore this.
  • accel_z (green) does change, but only when the array is getting close to flat. Note that at the beginning (when the array was pointed east) and at the end (when the array was pointed west), the accel_z values are actually the same. Thus, for determining the angle of the array, this value is not helpful on its own, so I ignored it.
  • accel_x (blue) gave me the value that corresponded to the rotational position of the array. Thus, this was the value I would focus on for calculating and setting the array angle.

At the end of the day, it was again Dr. Ted Dunning who pointed out how an accelerometer works and that I should be using the following equation to get the most accurate angle:

atan2(accel_x, accel_z)
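In Python, that equation translates to a few lines like the following (a minimal sketch; the function name is mine, and real readings would need whatever calibration offsets the mounting introduces):

```python
import math

def array_angle(accel_x, accel_z):
    """Tilt angle in degrees from two raw accelerometer axes.

    atan2 works on the ratio of the two axes, so raw (uncalibrated)
    readings can be used directly without first scaling to g units.
    """
    return math.degrees(math.atan2(accel_x, accel_z))

# Equal x and z components correspond to a 45-degree tilt.
print(array_angle(1.0, 1.0))  # approximately 45.0
```

Using atan2 (rather than dividing and taking atan) keeps the result well-defined even when accel_z passes through zero and preserves the sign of the tilt.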

However, at this point, there are going to be other, less controllable factors that play into these exact angles. For example:

  • How do I get an accurate angle in degrees from my sensor readings?
  • How perfectly level are my two foundations on the arrays?
  • How perfectly aligned are my arrays to the south?
  • How accurately is the sensor oriented and mounted in the enclosure?
  • How accurately mounted is the sensor to the frame of the array?

There could be many potential sources of error in an exact angle calculation, and at the end of the day, I am hoping for a reasonable placement of the arrays that won't be fooled by clouds, snow, and frost. If I can get an equation/program that puts it "darn close all of the time," instead of "probably close some of the time" and "never close some of the time," then I will call my experimentation a success. I didn't want "perfect" to be the enemy of "better than what I currently have."

Thus, for the purposes here, I settled on treating the accel_x value as a linear function of my rotation angle (-41 degrees to 41 degrees; more on that later) and used a simple linear equation to determine where the array was on that scale. As Ted points out, it's not perfectly accurate from a math perspective, but for my purposes it'll do, and as you will see in the next blog post, it is providing the same or better results than just letting the array track optically.
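Such a linear mapping might be sketched like this. The endpoint raw values below are placeholders I invented for illustration; the real ones would come from parking the array at each mechanical limit and recording accel_x there.

```python
# Placeholder calibration values: raw accel_x at the two end stops.
# (Assumed for illustration; the real numbers come from the hardware.)
ACCEL_X_EAST = -14200   # raw reading with the array at -41 degrees (full east)
ACCEL_X_WEST = 14200    # raw reading with the array at +41 degrees (full west)

def angle_from_accel_x(accel_x):
    """Linearly map a raw accel_x reading onto the -41..+41 degree range."""
    fraction = (accel_x - ACCEL_X_EAST) / (ACCEL_X_WEST - ACCEL_X_EAST)
    return -41.0 + fraction * 82.0

# With symmetric endpoints, a zero reading maps to the midpoint.
print(angle_from_accel_x(0))  # 0.0
```

Two calibration points and a straight line: crude compared to the atan2 math, but easy to reason about and, as noted above, good enough to beat "probably close some of the time."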

This blog post was published December 12, 2018.
