Training Challenges in the North

Iqaluit, Nunavut.

In February.

When I was first asked if I would be available to provide SMART Notebook training to teachers in Iqaluit, my main concern was that I did not have the gear to handle Canada's far north in the middle of winter. Sure, I had a parka, some gloves, and boots. That isn't uncommon for Canadians.

But there's a pretty big difference between winter in southern Ontario and northern Canada.

As it turns out, the weather wasn't nearly the challenge I thought it would be (even though my flight out did get cancelled due to a blizzard). I picked up some better boots, better mittens, a balaclava, and some snow pants, and ended up walking around quite a bit while in Iqaluit. It was a great experience, and I only fell through the snow once!

The real challenge of Iqaluit, from an educational technology training perspective, is the state of the Internet.

The Internet at the hotel was slow enough that I would click on a web link, walk away to do something else, and come back to the computer a couple of minutes later. The speed at the school wasn't any better. In fact, the school Internet was further hampered by the government filters. I have to wonder how long it will take for officials to realize that the filters are increasingly ineffective, especially as students begin to bring their own data-enabled devices into the classroom. The filters also end up blocking useful teaching tools and valuable information (some of the SMART-related resources appeared to be blocked). SMART Response worked, but not particularly well, and would not be usable for more than a handful of questions. To SMART's credit, the question web pages are actually quite small. Unfortunately, the school's Internet connection is so slow that the question pages would still take up to a minute to load on student devices. There was another delay between a student clicking to submit a response and the response being "received" by the teacher.
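
To put rough numbers on it, here is a quick back-of-the-envelope sketch. Every figure in it is an assumption on my part (I didn't measure the link), but it shows how even a "small" page can take a long time on a slow, high-latency connection:

    # Rough estimate of a page load on a slow, high-latency connection.
    # Every number here is an illustrative assumption, not a measurement.
    page_size_kb = 200     # assumed size of a question page and its resources
    link_kbps = 256        # assumed effective throughput of the shared link
    rtt_s = 0.6            # assumed round-trip time (northern links are often satellite)
    round_trips = 20       # assumed DNS + TCP + per-resource request round trips

    transfer_s = (page_size_kb * 8) / link_kbps   # ~6 seconds of raw transfer
    latency_s = rtt_s * round_trips               # ~12 seconds lost to round trips
    print(f"roughly {transfer_s + latency_s:.0f} seconds")   # roughly 18 seconds

Add contention from a classroom full of devices and a minute is easy to reach. The round trips hurt as much as the raw bandwidth, which is also why submitting a response shows its own separate delay.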

Surprisingly, SMART Maestro, the iPad-enabled feature of Notebook, ran smoothly. This suggests that most or all of the network traffic required to mirror the SMART Board to the iPad stays on the local network.



On my third day of training, I asked the teachers what their strategies were for integrating Internet-based materials into the classroom. In unison, several teachers replied, "We don't". This may seem like a shocking response in the 21st century, but it isn't a surprise once you've tried using the Internet in the school for a few days.

So, one solution could be to pre-download resources at home. The teachers did comment that their Internet speed at home was quite a bit better than the Internet at the school. Some teachers used this approach to a limited degree, but there was another problem. It seems that the best deal for Internet in Iqaluit includes only roughly 40GB of monthly data, and each additional GB is $15! Ouch! I can barely stay below my own 275GB monthly allotment, and have considered paying the additional $10/month for unlimited bandwidth. That's great for me, but there is clearly a problem with "Internet equity" in Canada.
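
To illustrate just how punishing those rates are, here is the overage arithmetic (the usage figure is a hypothetical household, but the prices are the ones quoted above):

    # Overage cost at Iqaluit-style pricing: $15 per GB past a 40GB cap.
    cap_gb = 40
    overage_per_gb = 15
    usage_gb = 100      # hypothetical monthly usage for a connected household

    overage = max(0, usage_gb - cap_gb) * overage_per_gb
    print(f"${overage} in overage charges")   # $900 on top of the base plan

At my own 275GB of monthly usage, the same arithmetic works out to more than $3,500 in overage charges. No teacher's budget absorbs that.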

The CRTC is currently soliciting input on broadband connectivity in Canada. The completed questionnaires must be submitted by February 29, 2016, so go participate as soon as possible (but please just read a little further first).

Before you respond to that poll, just take a few moments. Forget about Netflix. Forget about iTunes. Think about your own child not having access to the Internet to research a school subject. Consider that other students across the country have relatively easy access to resources like Homework Help, Khan Academy, and a variety of other online learning resources. Many school districts are moving to Google Apps or Office 365, tools that help enable collaboration and 21st century skills. From what I experienced in Iqaluit, these tools would be virtually unusable.

Apple, FBI, ISIS, and Secrets

This goes significantly off-topic from what I normally talk about, but still revolves around technology (and even touches on the potential impact on education). The news about the FBI demanding that Apple unlock the phone of one of the San Bernardino gunmen is everywhere, and the FBI using the suffering of the victims' families to get what they want is not only immoral, it is irresponsible.

One of the most common arguments that comes up regarding encryption and secrets is that if you aren't doing anything wrong, you have nothing to hide. This could not be further from the truth. Many businesses around the world depend on trade secrets, or keeping secret the development and progress of new products and technology. Law enforcement agencies may be protecting the identities of undercover agents, witnesses, or victims. You know, agencies like, say, the FBI. Can you say you have nothing to hide while still demanding answers about the breaches in security at Target, Neiman Marcus, and Michaels? More in line with education, schools and districts must also be sure they are keeping student information secure and private. This is not just something that should be done, but something that must be done. We all have "something to hide", even if we're not doing anything wrong.

The FBI claims it hopes to discover information on the phone that will help prevent other terror attacks. This is highly unlikely, and the FBI knows it. The San Bernardino gunmen were a man and a woman. Islamic extremist groups (ISIS, the Taliban) do not use women as "soldiers". This act of terror appears to have been "ISIS-inspired", but that is very different from "ISIS-plotted". The FBI can get access to phone records even without access to the phone. They likely already have a good idea who the gunmen were in contact with, and there is little else they could discover from the phone itself.

Asking Apple to try to create a method to circumvent security measures puts far more people at risk than any possible gain from unlocking this one phone.

There is a belief that the burden on, or cost to, Apple to circumvent the security of the phone is relatively small because it is such a large and wealthy company. Again, this could not be further from the truth. If Apple succeeds in gaining access to the phone, it calls into question, at least in the eyes of the public, the actual security of Apple's products. Apple could potentially lose contracts for large-scale deployments to government agencies, businesses, and yes, even school districts. The public perception of ineffective security could also cost Apple consumer sales. The costs go far beyond the hours required for Apple's developers to gain access to the phone's contents.

There isn't anything the FBI can do to bring back the victims of the attack, and it is disturbing that they are using the grief of the victims' families to advance some hidden and unrelated agenda.

Waiting on the next big thing

After recording the podcast following FETC this year, our group pondered why we didn't really see any major new technology.

I suggested that it might be related to the difficulties the major processor fabrication companies are having shrinking the chips used in our electronics. I quickly realized that this was a topic that my colleagues really had little knowledge of, and that most users of technology probably don't know much about the chips inside the gadgets we use every day.

This post is not intended to be an in-depth technical discussion. Hopefully I can provide a simple explanation of how our electronics have managed to get faster and do more things over the years, and give a quick overview of what is causing a slowdown in some areas of technology.

In 2006, Intel introduced the Core architecture of processors. These processors were manufactured on what Intel referred to as a 65nm process (a nanometer is one-billionth of a meter). The 65nm process had also been used in the later Pentium 4 processors. The 65nm figure describes the process as a whole: some "features" produced by the process are larger than 65nm, while others can be smaller.

Late in 2007, Intel began producing processors on a 45nm process. While 45nm is roughly 70% of 65nm, chips are two-dimensional, so the shrink applies in both directions. This means that the 45nm process can create an identical chip in roughly 48% of the area used by the 65nm process (45^2 / 65^2 ≈ 47.9%). The scaling isn't quite perfect, so chips don't shrink by exactly the amount the process naming implies. Still, you can see that chip manufacturers can pack a whole lot more transistors into the same amount of space used by the older process. Reduced size is not the only advantage of a new, smaller process; smaller processes use less power and generate less heat. The reduced size also normally means that a chip as complex as "last year's" high-end chip can be produced at a lower cost.

In early 2010, just over two years after introducing the 45nm process, Intel released chips produced on the 32nm process (roughly 50% of the area of 45nm). By mid-2012, Intel had started using a 22nm process (roughly 47% of the area of 32nm). The first sign of trouble came with Intel's 14nm chips (roughly 40% of the area of 22nm). Intel released a very limited number of 14nm chips, targeted mainly at low-power laptops. Higher-powered 14nm desktop and laptop chips did not show up until 2015. Intel's roadmap now shows that products based on their next process (10nm) are not due until late 2017.
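
The same squared-ratio arithmetic applies to every transition, and it is easy to check the percentages above (keeping in mind that process names are simplified labels, so real-world scaling is less tidy):

    # Area of an identical chip on each new process, relative to the old one.
    nodes = [65, 45, 32, 22, 14]   # nm
    for old, new in zip(nodes, nodes[1:]):
        print(f"{old}nm -> {new}nm: {(new / old) ** 2:.0%} of the original area")
    # 65nm -> 45nm: 48% of the original area
    # 45nm -> 32nm: 51% of the original area
    # 32nm -> 22nm: 47% of the original area
    # 22nm -> 14nm: 40% of the original area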

Intel is not the only chip-making company around. Other big players include TSMC and Samsung. Despite the public disputes between Apple and Samsung, the processors in most iPhones have actually been manufactured by Samsung. The latest iPhones have started using chips manufactured by TSMC. Samsung and TSMC have also started to struggle to make chips smaller. Some rumours suggested that with the iPhone 6 (and 6 Plus), Apple was taking so much of TSMC's limited capacity that other tech companies could not get access to the latest process. AMD and Nvidia, the two major graphics chip designers, have their graphics chips manufactured primarily by TSMC. Neither company released graphics chips using TSMC's 20nm process.

Limiting the latest and greatest manufacturing technologies to a handful of companies means that only those companies have the potential to make noticeable improvements, but they may not be under pressure to do so. Apple seems to have capitalized on its nearly exclusive access to TSMC's advanced process: benchmarks for the iPhone 6, and again for the 6S models, showed significant improvements in performance. Note that the iPhone is under competitive pressure from Android smartphones. Intel, on the other hand, faces little competition in its primary market of computer processors. Intel not only designs its processors but also owns the manufacturing facilities that produce them. The performance improvements in processors from Intel have been relatively small (5-10% from generation to generation).
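
Those small per-generation gains stay small even when compounded. A quick sanity check:

    # Compounding 5-10% per-generation gains over four generations.
    low, high = 1.05, 1.10
    print(f"{low ** 4:.2f}x to {high ** 4:.2f}x overall")   # 1.22x to 1.46x

At the low end, four generations compound to only about 22% overall.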

What about technology other than smartphones and computer processors?

We are starting to hear more about VR (virtual reality) and AR (augmented reality). Oculus, probably the most recognizable name in VR, announced the system requirements for the Rift VR headset. The cost of building a system to meet those requirements is quite high. Here is a quote from the announcement, highlighting the importance of the GPU (Graphics Processing Unit):

"Today, that system’s specification is largely driven by the requirements of VR graphics. To start with, VR lets you see graphics like never before. Good stereo VR with positional tracking directly drives your perceptual system in a way that a flat monitor can’t. As a consequence, rendering techniques and quality matter more than ever before, as things that are imperceivable on a traditional monitor suddenly make all the difference when experienced in VR. Therefore, VR increases the value of GPU performance."
Remember that AMD and Nvidia are the major sources of graphics chips, and that they likely did not get access to 20nm? Relatively few computers meet the graphics requirements of the Rift.

Other areas of technology may also have been stalled by limited access to the newest chip manufacturing processes. Nvidia makes the chips in the tablet for Google's Project Tango, a computer-vision platform for detecting objects (think self-driving cars). This technology is relevant for robotics, a topic I discussed in the podcast.

While the slowdown in technological advancement continues, more companies are finally getting access to the latest manufacturing processes. AMD and Nvidia are planning products based on 14nm and 16nm processes for release in 2016. AMD has stated that its upcoming graphics chips will make the largest leap in performance per watt in the history of the Radeon brand (AMD's primary graphics brand, introduced in 2000).

Hopefully this means we will see some new and really interesting tech at conferences next year.

Reflections on FETC 2016

This was my fourth trip to Orlando to attend FETC, and there were some notable differences from previous years. Our group was significantly larger than before, and included faculty, staff, master's students, a PhD student, and representatives from companies that work closely with us. We wrapped up FETC with a brief podcast. I will expand on my comments in that recording, and talk about some other things I noticed at FETC 2016.

The layout and size of the conference itself were noticeably different. The exhibit hall stretched from north to south, with the keynote area at the "back" of the convention center. The exhibit area was definitely smaller than in previous years, but still large enough to keep attendees busy exploring booths.

As noted by my colleagues in the podcast, there wasn't much that was particularly revolutionary or innovative to be found at FETC. This seems to be a reflection of the market in general. We all seem to be waiting for the next "big thing".

While not exactly new, this seemed to be the year of robots and maker spaces. I was particularly intrigued by Ozobot, which I believe is a great way to introduce young children to basic coding skills. The Ozobot will follow a path drawn with magic markers, and simple instructions can be given to it by alternating the colours drawn along the path. While a great implementation, I believe there are two challenges. First, what is the next step after the Ozobot? Once a child has mastered the instructions and "played", the Ozobot itself cannot go beyond its very basic programming. Second, the price tag of $50 USD is quite steep for such a simple robot that likely won't see much classroom time. A class set of 18 is $1000, which is not really a deal at all. Some extras are thrown in, but you give up the value of 2 Ozobots to get the extras. If the Ozobot were $20 USD, with a 25-unit bundle (with extras) at $500, I would be more excited.
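
Here is the bundle math, using the prices quoted above:

    # Ozobot class set vs. buying 18 units individually.
    unit_price = 50
    set_price = 1000
    set_units = 18

    individual_total = unit_price * set_units    # $900 for 18 bought separately
    premium = set_price - individual_total       # $100 extra for the bundle
    print(f"the extras cost the equivalent of {premium // unit_price} Ozobots")   # 2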

Sessions and conversations around maker spaces almost always include, or even focus on, the topic of 3D printing. There were a few booths showcasing 3D printers, but it is interesting that none were from the "big players" (Epson, Canon, HP, etc.). It does lead to concern about acquiring a device from a company that might not be around next year.

One "throw back" at FETC was typing instruction. There were several booths focusing on teaching typing skills. I have been told that this is a response to poor results in online tests where students that know the content are still doing poorly because they cannot type quickly enough to finish on time. I imagine these skills are also valuable for collaborative work on Google Docs or Office 365.

I have still been considering what I hope or expect to see in the future of educational technology. Other recent events, including CES, showcased quite a bit in the VR/AR (virtual reality/augmented reality) space, but I saw only a little of it at FETC. I know the system requirements for the Oculus Rift are fairly demanding, and it is also very expensive. If that were the only option, I would understand why it didn't make an appearance at FETC, but Google Cardboard seems a reasonable choice for VR in the classroom. Hopefully we see more immersive and interactive uses of Cardboard soon.

Remote Student Participation

On Wednesday we learned that one of our students would need to participate in classes remotely. Starting Monday.

Of course the first suggestion volunteered to me was, "Can't we just Skype the student in?" Our classes are not standard university undergraduate lectures. Our instructors are typically modelling the K-12 classroom. They move around quite a bit, and the students participate in small group activities. Skype running on a stationary device was not going to work.

I had a pretty good idea that what I really wanted was a VGo, but there was no way we were getting the funds for that. Even if, by some miracle, we managed to convince "the powers" to buy a VGo, it was virtually impossible that the convincing, purchasing, delivery, and setup would happen before Monday morning.

A couple of years ago I discovered Swivl at an ed tech conference (I honestly can't remember which one). I encouraged our Instructional Resource Centre to purchase a couple of them for use by our students for their micro-teaching videos. The students record themselves delivering a lesson activity, and then review it to evaluate and adjust their teaching methods. The students would often set up cameras on tripods, or ask another student to do the recording. Neither method was ideal: a tripod did not allow the student to move around, and audio was troublesome in both scenarios.

With Swivl, the "teacher" wears a wireless tracker (with integrated microphone), and the Swivl base turns and pivots to follow the tracker. The recording device (typically a smartphone or tablet) sits on the base. A single, short audio cable connects the base to the device to record the audio from the mic integrated into the tracker. It really is impressive in its simplicity, and works quite well.

The problem is that Swivl's primary use and design is around recording lesson activities, not video conferencing. The Swivl base connects to the recording device using a male-to-male, 4-segment 3.5mm cable. This is a fairly standard plug found in pretty much every smartphone and tablet. It carries both the mic-in and audio out. Unfortunately, this cable runs directly from the Swivl base to the device, with no splitter or plug in the base for the audio out.

Our initial tests using Lifesize Video (the standard video conferencing solution used by our university) and an iPad confirmed that audio was being recorded from the mic in the tracker, but no audio would play back unless the cable from the Swivl base was unplugged from the iPad.

We decided to try a 3.5mm 4-segment to 2 x 3.5mm 3-segment splitter.

[Photo: the adapter used to break out the mic-in and audio-out connections]
We actually had to use two of these adapters. One was used to convert the 4-segment mic out from the Swivl base to a standard 3-segment mic line. The second was connected to the iPad allowing us to plug in the mic from the Swivl base, and a set of external speakers.
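
For anyone wiring up something similar, this is the signal mapping the splitters were breaking out. I am assuming the CTIA pin order that iPhones and iPads use; verify against your own hardware before buying adapters:

    # 4-segment (TRRS) plug, CTIA pin order, split into two 3-segment (TRS) plugs.
    trrs_segments = {
        "tip":    "audio out, left",    # -> TRS plug 1 (external speakers)
        "ring 1": "audio out, right",   # -> TRS plug 1 (external speakers)
        "ring 2": "ground",             # -> shared by both TRS plugs
        "sleeve": "mic in",             # -> TRS plug 2 (from the Swivl tracker's mic)
    }
    for segment, signal in trrs_segments.items():
        print(f"{segment:>6}: {signal}")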


[Photo: our Swivl telepresence setup on a cart]

With everything plugged in, we started a Lifesize Video session and everything worked! The final bit was putting everything on a cart that could be easily moved between classes, taping together some of the cabling (to try to prevent instructors/students from unplugging cables from splitters), zip-tying some of the cables to tidy it up, and labeling plugs that couldn't easily be taped in place ("to iPad").

It would be nice to have the cart completely wireless, but we settled on a single power cord. The Swivl has an estimated 4-hour battery life, and the student has back-to-back classes that total 5 hours. We also didn't have battery-powered speakers.

It would also be better if the remote student could control the direction of the Swivl rather than relying on the tracker, especially during the small group sessions. This is a feature of Swivl Cloud Live, which is currently in beta; I have submitted the form to sign up. I see more experimenting in the next couple of weeks.

Friday morning we conducted a test session with the student and all went well. The first class is Monday morning. Fingers crossed.

Deploying Shared iPads the New-Old Way

After spending a couple of weeks just getting more and more frustrated at the mess Apple has made with Configurator 2 (AC2) and Profile Manager (PM), I discovered a way to use Configurator 2 along with Configurator 1.7 (AC1) to get the results I am actually after.

In AC1, it was relatively easy to wipe devices by restoring from a backup. The iPads would get wiped, apps pushed back out over USB, and the devices renamed. AC1 had no trouble remembering whatever name had been previously assigned to the iPad, and re-applying that name during the restore process. The renaming part is one of the areas where AC2 seems to fail. It's like it forgets device names during the restore. This is very bad when you want the device name to match the uniquely numbered name printed on the device itself.
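
Whatever tool ends up doing the restore, it helps to keep an independent record of which serial number belongs to which printed name, so the names can be reapplied no matter what the software forgets. A minimal sketch of the idea (the file name and format here are hypothetical, not something Configurator produces):

    import csv

    # Hypothetical inventory file, one row per iPad:
    #   serial,name
    #   ABC123XYZ,Cart1-iPad-01
    #   DEF456UVW,Cart1-iPad-02
    with open("ipad_names.csv", newline="") as f:
        names = {row["serial"]: row["name"] for row in csv.DictReader(f)}

    # After a restore, match each connected device's serial number to its
    # printed name and reapply it with whatever tool you trust.
    for serial, name in sorted(names.items()):
        print(f"{serial} should be named {name}")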

The biggest problem with continuing to use AC1 exclusively for deploying the iPads is that it does not support skipping all of the "Welcome steps" of iOS 9+ (the Setup Assistant), and there are a lot of steps now. Following a restore, it is necessary to manually step through declining a passcode, setting the region, location services, and more. You have to do this on every iPad, so it is not realistic to continue using AC1 exclusively for managing the iPads.

Aside from the renaming problem in AC2, there seems to be a major problem with app deployment. First, the new app deployment mechanism in AC2 does not use the old method of downloading apps via iTunes and importing the .app file; it requires you to use "Managed Distribution" for all of your apps, whether they are paid or free. If you want to deploy apps over USB during the restore process, you have to give AC2 the VPP Apple ID. When I did this, it reported that it was going to revoke Profile Manager's authority to manage distribution of apps! I assume this also means I would only be able to use this single computer to manage our apps, which is not workable across multiple sites.

The final piece that AC2 seems to break is that all of our iPads end up in the wrong timezone (and thus show the wrong time). I think restoring from a backup may deal with this, but I'm not sure.

It seems to be lose-lose, but it's not.

Note: For the following method, you must still have AC1 installed, or be able to install it. I'm not sure if you can still download it, but we still had it installed on the computer we use to manage the iPads.

To start, make sure that all of the iPads you want to manage are prepared and supervised by AC1, and assign all device names using AC1. When that is done, quit AC1 and run AC2. In AC2, use the Migration tool to import all of the information from AC1. Once the migration completes, you will be able to manage the iPads using both AC1 and AC2! If you add more devices later, you will need to add the new devices in AC1 and use the Migration tool again, but otherwise you only need to go through this process once.

When you need to wipe a shared device (or devices), follow these steps.

Run AC2 and connect all of the devices. Apply a blueprint that Prepares the iPads and applies profiles. The Prepare options are where you can disable the various iOS Welcome Screen (Setup Assistant) options. It will warn you that the connected devices are going to be erased. Just tell it to go ahead.

Once the process completes (takes about an hour for us), quit AC2 and run AC1. Select the iPads you have just restored and click Refresh. AC1 will push out any apps and profiles that are supposed to be on the devices, and it will rename the devices with the previously defined names!

Unplug the iPads and you will find that the Welcome steps have been skipped (well, those that can be skipped), your apps have been restored, and AC1 even sets the right timezone. If you've installed the PM management profile, you can even use it to push new profiles and apps remotely from PM later.

As far as I can tell, this is the best way to manage a collection of shared iPads that need to be regularly wiped. I would love to hear from others if they have found a better way.

Deploying iPads the new way

Oddly enough, the thing I struggled with the most for this entry was the title. Here are some that went through my head at various stages of deploying (or preparing to deploy) a new batch of shared iPad minis this past week.
  • Apple Configurator 2 Challenges
  • Apple Configurator 2 and Profile Manager Challenges
  • Why does DEP need to exist?
  • iPads for Schools: Only if You're 1:1
  • Apple Hates You
Over the last few years, we have been downloading codes for use with Configurator 1.x, and happily deploying to various iPad carts from separate computers across three sites. Certain options in Configurator even made it relatively easy to wipe and restore the iPads when they were returned from our pre-service Teacher Education students, something that I imagine is important in any iPad deployment where the iPads are shared.

In addition to Configurator, we have been using Meraki for some management and deployment. Our most recent acquisition of a new batch of iPads for the program pushed us over the 100-device limit of Meraki's free tier. We started looking at the various MDM options, and the costs quickly added up. This is where Profile Manager comes in. This is also where the dependency madness began.

Profile Manager and Configurator 2 led to updates being required for virtually everything else. OS X had to be upgraded to El Capitan on the computer running Configurator. OS X Server had to be upgraded on our Mac Pro, which in turn also required El Capitan.

So, with everything ready, I started the deployment process. Well, actually, several different deployment processes trying to figure out just how to adequately manage over 100 shared iPads.

Now, the iPads are kept in carts, and they are numbered. The new iPads have numbers inscribed on them. The old iPads have labels affixed. Well, Configurator lets you automatically number iPads during the Prepare process, so great, right? Sort of. Here are the options in Configurator 2.


  1. Plug in all of the iPads and let Configurator 2 name them, randomly assigning numbers that do not actually correspond to the numbers on the iPads.
  2. Plug in all of the iPads and assign them all the same name in the Prepare process. Next, unplug all of them. Finally, plug them back in one at a time and manually name them.
  3. Plug them in and Prepare them one at a time, manually assigning the name.
In other words, all the options suck, and it gets worse.

When wiping and restoring the iPads to ensure no personal photos or data are on them (remember, these are shared iPads), Configurator 2 completely forgets which iPad had which name! Are you kidding me, Apple?! I tried several methods to restore hoping that the name would be retained, but it was all in vain. I ended up giving up and assigning the same name to all the iPads, knowing full well what the repercussions would be.

With Profile Manager configured, I downloaded the management and trust profiles, and started the Prepare process. Of course, a few of the iPads had issues during this process and didn't finish completely. No problem, right? Oh, wait. The iPads do not have unique names that correspond to the iPad numbers! Now I have to pull the iPads out of the cart in search of the problematic ones! 

The next step was deploying apps. Our paid apps will have to wait, because Configurator 2 no longer supports the spreadsheet method. We can convert all of our old licenses, but this has to be coordinated across multiple locations and departments (the reason we purchased downloadable codes with separate spreadsheets to begin with). This is also where Profile Manager comes in. I began pushing the apps (50 of them) out to the iPads. The iPads are all connected to the same WiFi network, and based on the progress, it looks like it's going to be a multi-day process. The best part? We would need to do this every time we wipe the iPads! They're shared devices, so we need to wipe them regularly. Oh, and an iPad can fail during this process as well, which means manually trying to figure out which iPad is the problem.
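
Some back-of-the-envelope numbers (all of them my assumptions, not measurements) show why this scales so badly:

    # Pushing apps over shared WiFi to a fleet of shared iPads.
    # Every figure below is an illustrative assumption.
    apps = 50
    avg_app_mb = 80       # assumed average app size
    ipads = 100
    wifi_mbps = 100       # assumed sustained real-world throughput of the network

    total_gb = apps * avg_app_mb * ipads / 1000    # 400 GB to move
    hours = total_gb * 8000 / wifi_mbps / 3600     # if the link stays saturated
    print(f"{total_gb:.0f} GB, ~{hours:.0f} hours in the best case")   # ~9 hours

And that is with a perfectly saturated link. Contention, per-device overhead, and the occasional failed iPad are what stretch it into days, and then you get to repeat it after every wipe.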

OK. So, use Configurator 2 to push the apps out, right? When I tried to set up our VPP account in Configurator 2, I was told that it would remove the management from Profile Manager, so I would lose the remote management capability! Is this for real?!

OK, OK. I'll make a backup of an iPad with the apps already installed, then restore that backup to the rest of the iPads! Nope. The test iPad restored from that backup not only lacked the installed apps, it complained about failing to install the management profile! ARGH!!!

I need to circle back a little, because an ongoing issue with management profiles is Apple's DEP (Device Enrolment Program). The management profiles installed on the iPads can be removed by any user, without a password. The only way around this is to enroll in DEP, and only devices purchased within a given timeframe can be added to DEP (forget about the collection of iPad 2s we purchased several years ago). How does this make any sense?! How is it not possible for Apple to simply allow management profiles to be password protected?! This is absolutely insane!


I can only hope that I have missed some critical step somewhere. I have Googled and pretty much found nothing but complaints about being "forced into Configurator 2". I suspect the problems I have described are not currently "solvable".


It comes as no surprise to me that Chromebooks are gaining in popularity for education. iPad deployment, especially for shared devices, is a nightmare.