====== Fixing Kernel drivers for Huawei E3372s LTE/4G Stick ======

In order to mitigate the problem of having no internet access at all, we got a SIM card for LTE/4G. At the same time, the Telekom Speedstick V LTE (which is basically a rebranded Huawei E3372s) became available and was not too expensive. A little research showed that it should be really easy to deploy this stick when flashing a non-HiLink firmware (<=21.xxx), after which the stick can be used just like the old modems, with AT commands, but without the dependency on pppd/wvdial like in the old modem or even 3G days. You just have to
  
<code>
$ echo 'AT^NDISDUP=1,1,"internet.eplus.de"' > /dev/ttyUSB1
$ udhcpc -i wwan0
</code>
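
For convenience, this can be wrapped into a small script. A minimal sketch, assuming the non-HiLink firmware exposes its AT port as /dev/ttyUSB1 and the network interface comes up as wwan0 (both may differ on your system); internet.eplus.de is just our provider's APN, substitute your own:

<code>
#!/bin/sh
# lte-up.sh - bring up the NDIS connection on a Huawei E3372s (non-HiLink firmware)
APN="internet.eplus.de"   # APN of your provider
AT_PORT="/dev/ttyUSB1"    # AT command port exposed by the stick
IFACE="wwan0"             # network interface created by the cdc_ncm/huawei_cdc_ncm driver

# send the CR-terminated NDISDUP command to establish the data connection
printf 'AT^NDISDUP=1,1,"%s"\r\n' "$APN" > "$AT_PORT"

# give the modem a moment to attach, then request an IP via DHCP
sleep 3
udhcpc -i "$IFACE"
</code>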
  
and you should be done with it. In reality, this became quite a quest, but with combined efforts we managed to get it to work and created a patch for current kernels (tested with 3.18.11-18). If you also have this stick and are unable to get an IP via DHCP, this patch might be for you. The underlying problem: the E3372s expects the NDP (datagram pointer) to sit at the end of the NCM frame instead of directly after the NTH16 header, so the cdc_ncm driver has to learn a new device flag. The following commands apply the patch inside an OpenWrt buildroot (ar71xx target); on a plain kernel tree you can simply run the patch command from the source root.
  
<code>
$ make clean
$ make target/linux/{clean,prepare} V=s QUILT=1
$ cd build_dir/target-mips_34kc_musl-1.1.10/linux-ar71xx_generic/linux-3.18.18/
$ patch -p1 -i /path/to/patchfile/fix-cdc-ncm-huawei-3372s.patch
</code>
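
Applied this way, the patch is gone again after the next make clean. To make it stick, it can be dropped into the generic kernel patch directory of the buildroot instead (a sketch, assuming a Chaos Calmer-era OpenWrt tree with kernel 3.18; the 999- prefix is an arbitrary ordering choice):

<code>
$ cp /path/to/patchfile/fix-cdc-ncm-huawei-3372s.patch \
     target/linux/generic/patches-3.18/999-fix-cdc-ncm-huawei-3372s.patch
$ make target/linux/{clean,compile} V=s
</code>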
  
<sxh c>
diff -u a/drivers/net/usb/cdc_mbim.c b/drivers/net/usb/cdc_mbim.c
--- a/drivers/net/usb/cdc_mbim.c	2015-06-28 17:40:40.000000000 +0000
+++ b/drivers/net/usb/cdc_mbim.c	2015-07-04 15:05:14.546901702 +0000
@@ -158,7 +158,7 @@
 	if (!cdc_ncm_comm_intf_is_mbim(intf->cur_altsetting))
 		goto err;

-	ret = cdc_ncm_bind_common(dev, intf, data_altsetting);
+	ret = cdc_ncm_bind_common(dev, intf, data_altsetting, 0);
 	if (ret)
 		goto err;

diff -u a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
--- a/drivers/net/usb/cdc_ncm.c	2015-06-28 17:40:40.000000000 +0000
+++ b/drivers/net/usb/cdc_ncm.c	2015-07-09 08:43:01.658770535 +0000
@@ -684,10 +684,11 @@
 		ctx->tx_curr_skb = NULL;
 	}

+	kfree(ctx->delayed_ndp16);
+
 	kfree(ctx);
 }

-int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_altsetting)
+int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_altsetting, int drvflags)
 {
 	const struct usb_cdc_union_desc *union_desc = NULL;
 	struct cdc_ncm_ctx *ctx;
@@ -855,6 +856,17 @@
 	/* finish setting up the device specific data */
 	cdc_ncm_setup(dev);

+	/* Device-specific flags */
+	ctx->drvflags = drvflags;
+
+	/* Allocate the delayed NDP if needed. */
+	if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END) {
+		ctx->delayed_ndp16 = kzalloc(ctx->max_ndp_size, GFP_KERNEL);
+		if (!ctx->delayed_ndp16)
+			goto error2;
+		dev_info(&intf->dev, "NDP will be placed at end of frame for this device.");
+	}
+
 	/* override ethtool_ops */
 	dev->net->ethtool_ops = &cdc_ncm_ethtool_ops;
@@ -954,8 +966,11 @@
 	if (cdc_ncm_select_altsetting(intf) != CDC_NCM_COMM_ALTSETTING_NCM)
 		return -ENODEV;

-	/* The NCM data altsetting is fixed */
-	ret = cdc_ncm_bind_common(dev, intf, CDC_NCM_DATA_ALTSETTING_NCM);
+	/* The NCM data altsetting is fixed, so we hard-coded it.
+	 * Additionally, generic NCM devices are assumed to accept arbitrarily
+	 * placed NDP.
+	 */
+	ret = cdc_ncm_bind_common(dev, intf, CDC_NCM_DATA_ALTSETTING_NCM, 0);

 	/*
 	 * We should get an event when network connection is "connected" or
@@ -986,6 +1001,14 @@
 	struct usb_cdc_ncm_nth16 *nth16 = (void *)skb->data;
 	size_t ndpoffset = le16_to_cpu(nth16->wNdpIndex);

+	/* If NDP should be moved to the end of the NCM package, we can't follow the
+	 * NTH16 header as we would normally do. NDP isn't written to the SKB yet, and
+	 * the wNdpIndex field in the header is actually not consistent with reality. It will be later.
+	 */
+	if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END)
+		if (ctx->delayed_ndp16->dwSignature == sign)
+			return ctx->delayed_ndp16;
+
 	/* follow the chain of NDPs, looking for a match */
 	while (ndpoffset) {
 		ndp16 = (struct usb_cdc_ncm_ndp16 *)(skb->data + ndpoffset);
@@ -995,7 +1018,8 @@
 	}

 	/* align new NDP */
-	cdc_ncm_align_tail(skb, ctx->tx_ndp_modulus, 0, ctx->tx_max);
+	if (!(ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END))
+		cdc_ncm_align_tail(skb, ctx->tx_ndp_modulus, 0, ctx->tx_max);

 	/* verify that there is room for the NDP and the datagram (reserve) */
 	if ((ctx->tx_max - skb->len - reserve) < ctx->max_ndp_size)
@@ -1008,7 +1032,11 @@
 	nth16->wNdpIndex = cpu_to_le16(skb->len);

 	/* push a new empty NDP */
-	ndp16 = (struct usb_cdc_ncm_ndp16 *)memset(skb_put(skb, ctx->max_ndp_size), 0, ctx->max_ndp_size);
+	if (!(ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END))
+		ndp16 = (struct usb_cdc_ncm_ndp16 *)memset(skb_put(skb, ctx->max_ndp_size), 0, ctx->max_ndp_size);
+	else
+		ndp16 = ctx->delayed_ndp16;
+
 	ndp16->dwSignature = sign;
 	ndp16->wLength = cpu_to_le16(sizeof(struct usb_cdc_ncm_ndp16) + sizeof(struct usb_cdc_ncm_dpe16));
 	return ndp16;
@@ -1023,6 +1051,15 @@
 	struct sk_buff *skb_out;
 	u16 n = 0, index, ndplen;
 	u8 ready2send = 0;
+	u32 delayed_ndp_size;
+
+	/* When our NDP gets written in cdc_ncm_ndp(), then skb_out->len gets updated
+	 * accordingly. Otherwise, we should check here.
+	 */
+	if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END)
+		delayed_ndp_size = ctx->max_ndp_size;
+	else
+		delayed_ndp_size = 0;

 	/* if there is a remaining skb, it gets priority */
 	if (skb != NULL) {
@@ -1077,7 +1114,7 @@
 		cdc_ncm_align_tail(skb_out,  ctx->tx_modulus, ctx->tx_remainder, ctx->tx_max);

 		/* check if we had enough room left for both NDP and frame */
-		if (!ndp16 || skb_out->len + skb->len > ctx->tx_max) {
+		if (!ndp16 || skb_out->len + skb->len + delayed_ndp_size > ctx->tx_max) {
 			if (n == 0) {
 				/* won't fit, MTU problem? */
 				dev_kfree_skb_any(skb);
@@ -1150,6 +1187,17 @@
 		/* variables will be reset at next call */
 	}

+	/* If requested, put NDP at end of frame. */
+	if (ctx->drvflags & CDC_NCM_FLAG_NDP_TO_END) {
+		nth16 = (struct usb_cdc_ncm_nth16 *)skb_out->data;
+		cdc_ncm_align_tail(skb_out, ctx->tx_ndp_modulus, 0, ctx->tx_max);
+		nth16->wNdpIndex = cpu_to_le16(skb_out->len);
+		memcpy(skb_put(skb_out, ctx->max_ndp_size), ctx->delayed_ndp16, ctx->max_ndp_size);
+
+		/* Zero out delayed NDP - signature checking will naturally fail. */
+		ndp16 = memset(ctx->delayed_ndp16, 0, ctx->max_ndp_size);
+	}
+
 	/* If collected data size is less or equal ctx->min_tx_pkt
 	 * bytes, we send buffers as it is. If we get more data, it
 	 * would be more efficient for USB HS mobile device with DMA
diff -u a/drivers/net/usb/huawei_cdc_ncm.c b/drivers/net/usb/huawei_cdc_ncm.c
--- a/drivers/net/usb/huawei_cdc_ncm.c	2015-06-28 17:40:40.000000000 +0000
+++ b/drivers/net/usb/huawei_cdc_ncm.c	2015-07-04 15:23:25.779014586 +0000
@@ -73,11 +73,14 @@
 	struct usb_driver *subdriver = ERR_PTR(-ENODEV);
 	int ret = -ENODEV;
 	struct huawei_cdc_ncm_state *drvstate = (void *)&usbnet_dev->data;
+	int drvflags = 0;

 	/* altsetting should always be 1 for NCM devices - so we hard-coded
-	 * it here
+	 * it here. Some huawei devices will need the NDP part of the NCM package to
+	 * be at the end of the frame.
 	 */
-	ret = cdc_ncm_bind_common(usbnet_dev, intf, 1);
+	drvflags |= CDC_NCM_FLAG_NDP_TO_END;
+	ret = cdc_ncm_bind_common(usbnet_dev, intf, 1, drvflags);
 	if (ret)
 		goto err;

diff -u a/include/linux/usb/cdc_ncm.h b/include/linux/usb/cdc_ncm.h
--- a/include/linux/usb/cdc_ncm.h	2015-06-28 17:40:40.000000000 +0000
+++ b/include/linux/usb/cdc_ncm.h	2015-07-04 15:27:52.171388014 +0000
@@ -80,6 +80,9 @@
 #define CDC_NCM_TIMER_INTERVAL_MIN		5UL
 #define CDC_NCM_TIMER_INTERVAL_MAX		(U32_MAX / NSEC_PER_USEC)

+/* Driver flags */
+#define CDC_NCM_FLAG_NDP_TO_END	0x02	/* NDP is placed at end of frame */
+
 #define cdc_ncm_comm_intf_is_mbim(x)  ((x)->desc.bInterfaceSubClass == USB_CDC_SUBCLASS_MBIM && \
 				       (x)->desc.bInterfaceProtocol == USB_CDC_PROTO_NONE)
 #define cdc_ncm_data_intf_is_mbim(x)  ((x)->desc.bInterfaceProtocol == USB_CDC_MBIM_PROTO_NTB)
@@ -103,9 +106,11 @@

 	spinlock_t mtx;
 	atomic_t stop;
+	int drvflags;

 	u32 timer_interval;
 	u32 max_ndp_size;
+	struct usb_cdc_ncm_ndp16 *delayed_ndp16;

 	u32 tx_timer_pending;
 	u32 tx_curr_frame_num;
@@ -133,7 +138,7 @@
 };

 u8 cdc_ncm_select_altsetting(struct usb_interface *intf);
-int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_altsetting);
+int cdc_ncm_bind_common(struct usbnet *dev, struct usb_interface *intf, u8 data_altsetting, int drvflags);
 void cdc_ncm_unbind(struct usbnet *dev, struct usb_interface *intf);
 struct sk_buff *cdc_ncm_fill_tx_frame(struct usbnet *dev, struct sk_buff *skb, __le32 sign);
 int cdc_ncm_rx_verify_nth16(struct cdc_ncm_ctx *ctx, struct sk_buff *skb_in);
</sxh>
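
After rebooting into the patched kernel (or flashing the rebuilt image), it is easy to check whether the new code path is active, since the patch logs a message when the NDP-to-end flag is set for a device. A quick sanity check might look like this (the interface name may differ on your system):

<code>
$ dmesg | grep "NDP will be placed at end of frame"   # printed by the patched driver on bind
$ ip link show wwan0                                  # the NCM network interface should exist
$ udhcpc -i wwan0                                     # with the patch, this should finally get a lease
</code>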
  
  
{{tag>ethernet switch review hardware test energy}}
  
~~DISCUSSION~~