//Do we really want to give out our complete sensory data (sys/env/biometrics) over all time, and possibly full remote control over all the actors built into everything, at all times, at the place we like to call our home?//

Some people may not have realized yet that we already have plenty of open-source tools to store, analyze, link and visualize billions of data rows quickly and with ease. Imagine what people with a multi-billion budget are able to employ. To give you a small-scale example of how transparent anyone's little life and habits become, I've created a [[https://apollo.open-resource.org/flight-control/vfcc/#/dashboard/db/aquarius-hab-environment-indoor|dashboard]] which doesn't show many metrics yet (more are in the works), but it's more than enough if you learn how to interpret the graphs. The data you see there is mostly generated by two Spark Cores deployed here. Big/Open-Data/Cloud technology is not the problem itself; it's our culture/society, which obviously isn't ready for it.
  
In the year 2014, in a post-[[http://spectrum.ieee.org/telecom/security/the-real-story-of-stuxnet|Stuxnet]], [[http://www.heise.de/extras/timeline/|Snowden]] & [[http://www.zdnet.com/unsealed-docs-show-what-really-happened-with-lavabit-7000021489/|Lavabit]] era, we have no choice but to come out of our state of denial and simply accept the fact that every commercial entity can be compromised through multiple legal, administrative, monetary, social/personal or technological levers. Access and cloud providers are no exception. All of them can be tricked, coerced or forced to "assist" in one way or another. No matter what anyone promises, from this point on they all have to be considered compromised.
{{:mission:log:2014:10:remote-spark-core-cloud-overview.png|}}
  
In this picture the blue lines represent the data flow between the Cores, the clients and the central server. All points marked with a red C show where the current implementation/infrastructure has to be considered compromised, and the yellow P marks potential security risks: since the firmware isn't compiled locally, theoretically anything can be injected into it, either in the AWS cloud or even in-stream. Tests showed that the API webservers don't offer perfect forward secrecy, and the Cores themselves use only AES-128-CBC without DH key exchange, which offers no forward secrecy at all. Not having reliable crypto and passing everything through compromised infrastructure can't be the way to go. Not to mention the additional bandwidth this concept ultimately requires, when you consider the available IPv6 address space and the fair likelihood that, not so far off, there will be more IoT clients connected to the Internet than there are humans.
  
===== =====
{{:mission:log:2014:10:local-spark-core-cloud-overview.png}}
  
When you follow this howto and secure your network access with a strong VPN ([[https://airvpn.org/|AirVPN]] is used here), you'll end up with something that looks like this image, where we effectively mitigate all these issues and take back control of our privacy & autonomy. At least we can now decide if and which data we want to share and publish.
  
==== Key Features/Aspect Comparison ====
At this time it wasn't possible yet to use a Gentoo crossdev toolchain to compile the firmware, since it seems to require newlib-nano instead of the plain newlib Gentoo would like to merge. There wasn't enough time to hunt down this particular bug further, so the [[https://launchpad.net/gcc-arm-embedded/+download|official toolchain]] was used instead.
  
  
==== Compile firmware ====

=== Default Firmware ===
  
<code>
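# A minimal sketch of the default build, assuming the stock checkouts of
# core-firmware, core-common-lib and core-communication-lib side by side;
# these commands are an assumption, not the original listing:
cd core-firmware/build
make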
</code>
  
=== How do I create my own firmware? ===

Basically, you can just go into core-firmware/applications and create a new folder for your project there; the Makefile looks for application.cpp in this folder. However, I found this a bit cumbersome and didn't want my code to live in the firmware's git repo, so I decided to put my Spark projects into my main spark-core folder. Since I use Atom as an editor, I find it convenient to build the firmware directly from within the editor's UI, without having to switch to a terminal window. This just calls a build shell script, which links the code into the core-firmware/applications folder, sets some env vars and starts the build with the make APP=$projectname parameter. Automatic OTA update is just one more line in the same script. An example of this hackish construct is available under:
https://github.com/apollo-ng/spark-lighter
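
For illustration, here is a minimal sketch of what such a build script could look like. The paths, project name, Core ID and binary name below are assumptions, not the actual script from the repo above:

<code>
#!/bin/bash
# build.sh - hypothetical example: link the project into the firmware
# tree, build it, and (optionally) push it OTA to a designated Core.

APP="spark-lighter"                     # project folder name
SRC="$HOME/spark-core/$APP"             # where application.cpp lives
FW="$HOME/spark-core/core-firmware"     # core-firmware checkout
CORE_ID="0123456789abcdef01234567"      # placeholder Core ID

# link the project into the applications folder the Makefile scans
ln -sfn "$SRC" "$FW/applications/$APP"

# build with the application selected via the APP parameter
cd "$FW/build" && make APP="$APP"

# one-liner OTA update via spark-cli (binary path may differ per setup)
spark flash "$CORE_ID" "$APP.bin"
</code>

In Atom, a build package or keybinding can then simply invoke this script, which is all the editor integration mentioned above amounts to.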
==== OTA firmware update ====
  
It's possible to update the Cores via WiFi, also called an OTA (Over the Air) update. This is a really lovely feature, since we don't have to run around and deploy new firmware locally via USB but can just push new firmware to any Core ID we designate. I have this one-liner directly in my build script, so when I build in the Atom editor, it compiles the firmware and automatically pushes an OTA update to the designated Core. If there were more Cores with the same firmware, it should be easy enough to auto-update more than one.
  
<code>
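# Hypothetical example of the OTA one-liner via spark-cli
# (the Core ID and binary name are placeholders):
spark flash 0123456789abcdef01234567 core-firmware.bin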
</code>
   * Compiling through the local cloud is on the road map but doesn't work yet   * Compiling through the local cloud is on the road map but doesn't work yet
  
==== Final Notes ====

Depending on your state of mind you might perceive all this as paranoid, but I can assure you this has nothing to do with paranoia, nor should it be read as a rant against Spark-Core or Amazon Web Services for that matter. Amazon Web Services is just the cloud provider used by spark.io and therefore got mentioned. What applies here applies in general to any other cloud platform one could choose. From a business standpoint, the decision to put things into AWS seems absolutely valid to me. Of course it's a little more expensive when you crunch the numbers, but in return you get the full orchestra of AWS products, which in my experience do a good job of working together: Route 53, ELB, multiple geolocations and the whole shebang. And you can react very quickly to changes in request demand. In a perfect world I would just use it as it is, because the setup isn't bad if we consider bandwidth not to be a problem. But when government agencies run haywire and military/intelligence/media war- and fearmongering get completely out of hand, as they obviously have during the last 12 years, and no one really does a thing about it, the only logical place left to seek change is in oneself. Do it yourself, then :) I am happy, grateful and amazed that everybody can now get these devices to tinker, create and learn. Hopefully some of these experiments and examples will help someone else save some time and wrap their brains around these concepts more quickly.

{{tag>spark-core embedded arduino security software hardware IoT crypto vpn}}
  
{{keywords>Apollo-NG apollo next generation hackerspace hacker space research development makerspace fablab diy community open-resource open resource mobile hackbus spark-core embedded arduino security software hardware IoT crypto vpn}}
  
~~DISCUSSION~~