Mainstreaming VR

Two of the major players in the world of VR tech are the HTC Vive and the Oculus Rift. These two VR headsets may take slightly different approaches to creating immersive VR, but they both handle the 3D aspect in very similar ways, and have nearly identical system requirements. The system requirements themselves are quite steep, especially when it comes to the PC's graphics chip.

In the last month, the two major graphics chip makers have released new graphics cards for PCs that help make VR slightly more affordable. At the end of June, AMD released the $200 (US) Radeon RX 480 graphics card, and in mid-July Nvidia released the $249 (US) GeForce GTX 1060.

What makes these cards so important to VR is the price. Prior to the launch of these cards, the cheapest video card that met the requirements of the Vive and Rift would typically have cost the consumer well over $300. Reducing the "cost of entry" to any technology by $50 to $200 is a great step. In March, one site priced out a system that met the minimum system requirements for VR, and it totaled $939. The video card used in that build was $309, and there are now Radeon RX 480 cards priced at $199. That drops the total PC build price by nearly 12%!
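To put that in concrete terms, here is a quick back-of-the-envelope sketch using the figures above (the $939 March build, the $309 card it included, and a $199 RX 480):

```python
# Back-of-the-envelope savings from swapping the video card in the March build.
old_build = 939   # total cost (USD) of the March VR-capable build
old_gpu = 309     # video card used in that build
new_gpu = 199     # current Radeon RX 480 price

new_build = old_build - old_gpu + new_gpu
savings_pct = (old_build - new_build) / old_build * 100

print(f"New build total: ${new_build}")  # $829
print(f"Savings: {savings_pct:.1f}%")    # 11.7% -- "nearly 12%"
```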

Recent rumours point to the Nvidia GTX 1060 making its way into laptops, mostly unchanged from the chip in the desktop video cards. In the past, Nvidia has launched "M" (mobile) versions of their graphics chips that were significantly different from their desktop parts. Laptops based on Nvidia's past x60 graphics chips could often be found for sale in the $1000 to $1200 range. The prices on laptops using Nvidia's previous generation of chips (the GTX 960) can be found quite a bit lower right now. That is a good sign that laptops using the new generation of chips will likely be available soon. If the rumours about the mobile GTX 1060 are true, fully VR-capable laptops may be available for under $1200 in the very near future.

Unfortunately, one of the biggest barriers to VR headsets being mainstream is the cost of the headsets themselves. The Oculus Rift is $600 (US), while the Vive is an even heftier $800. When considering this as part of the total VR system, a price drop of $200 won't help get VR into the mainstream. That's OK for now. Perhaps by the time the headsets see a significant price drop, there will actually be apps, content, and games available that make it worth using VR.

Or maybe VR is the next 3D TV: a lot of hype that just fizzles out.

The best thing about the iOS 9.3 release

Anyone even remotely familiar with Apple knows that, alongside the new iPad and iPhone, there is a new version of iOS. The new software includes some great new features and updates, and at least one major feature that schools have been wanting for a long time.

There is a new Night Shift mode that may help users get better sleep. Notes can now be locked (passcode or Touch ID). Stills can now be extracted from Live Photos. There is even multi-user support for schools with shared iPads!

The multi-user support is something that we will want to explore as soon as possible. I know there are many schools that have wanted this feature for a long time. I still haven't had a chance to investigate exactly how it works, or how it impacts the deployment process, but that's where the actual best thing about the iOS release comes in. It's the timing.

Major iOS updates are typically delivered with the release of the new flagship iPhone. The problem is that, since the iPhone 4S, that has happened in the fall, just after all of a school's iPads have been deployed. Administrators are left scrambling to figure out the impact of the new software. Worse, Apple has made it easy for the user to go ahead and update devices, even if the administrator doesn't yet know the impact of (or problems with) the update.

Now, I know this is just a "point release", and I know that iOS 10 (X?) will probably still be released in the fall. I'm just glad that this update, with such a major feature for schools, isn't landing after a new school year has just barely begun. Sure, May or June would be best, but September and October are probably the worst possible times for a new iOS release.

Training Challenges in the North

Iqaluit, Nunavut.

In February.

When I was first asked if I would be available to provide SMART Notebook training to teachers in Iqaluit, my main concern was that I did not have the gear to handle Canada's far north in the middle of winter. Sure, I had a parka, some gloves, and boots. That isn't uncommon for Canadians.

But there's a pretty big difference between winter in southern Ontario and northern Canada.

As it turns out, the weather wasn't nearly the challenge I thought it would be (even though my flight out did get cancelled due to a blizzard). I picked up some better boots, better mittens, a balaclava, and some snow pants, and ended up walking around quite a bit while in Iqaluit. It was a great experience, and I only fell through the snow once!

The real challenge of Iqaluit, from an educational technology training perspective, is the state of the Internet.

The Internet speed at the hotel would lead me to click on a web link, walk away to do something else, and come back to the computer a couple of minutes later. The speed at the school wasn't any better. In fact, the school Internet was further impacted by the government filters. I have to wonder how long it will take for officials to realize that the filters are increasingly ineffective, especially as students begin to bring their own data-enabled devices into the classroom. The filters also end up blocking useful teaching tools and valuable information (some of the SMART-related resources appeared to be blocked).

SMART Response worked, but not particularly well, and would not be usable for more than a handful of questions. To SMART's credit, the question web pages are actually quite small. Unfortunately, the school's Internet connection is so slow that the question pages would still take up to a minute to load on student devices. There is another delay between the student clicking to submit a response and the response being "received" by the teacher.

Surprisingly, SMART Maestro, the iPad-enabled feature of Notebook, ran smoothly. This suggests that most or all of the network traffic required to mirror the SMART Board to the iPad stays on the local network.

On my third day of training, I asked the teachers what their strategies were for integrating Internet-based materials into the classroom. In unison, several teachers replied, "We don't". This may seem like a shocking response in the 21st century, but it isn't a surprise once you've tried using the Internet in the school for a few days.

So, the solution could be to pre-download resources from home. The teachers did comment that their Internet speed at home was quite a bit better than the Internet at the school. This was a solution used to a limited degree by some teachers, but there was another problem. It seems that the best deal for Internet in Iqaluit only includes roughly 40GB of monthly data, and each additional GB is $15! Ouch! I can barely stay below my 275GB monthly allotment and have considered paying the additional $10/month to get unlimited bandwidth. That's great for me, but there is clearly a problem with "Internet equity" in Canada.
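To illustrate the gap, here is a rough (hypothetical) calculation of what my own monthly usage would cost on the Iqaluit plan described above, using only the figures quoted; actual plans and pricing may differ:

```python
# What a 275GB month would cost on a 40GB cap with $15/GB overage.
cap_gb = 40          # data included in the Iqaluit plan
overage_per_gb = 15  # dollars per additional GB
usage_gb = 275       # my typical monthly usage down south

overage_cost = max(0, usage_gb - cap_gb) * overage_per_gb
print(f"Overage charges: ${overage_cost}")  # $3525 -- on top of the base plan
```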

The CRTC is currently soliciting input on broadband connectivity in Canada. The completed questionnaires must be submitted by February 29, 2016, so go participate as soon as possible (but please just read a little further first).

Before you respond to that poll, just take a few moments. Forget about Netflix. Forget about iTunes. Think about your own child not having access to the Internet to research a school subject. Consider that other students across the country have relatively easy access to resources like Homework Help, Khan Academy, and a variety of other online learning resources. Many school districts are moving to Google Apps or Office 365, tools that help enable collaboration and 21st century skills. From what I experienced in Iqaluit, these tools would be virtually unusable.

Apple, FBI, ISIS, and Secrets

This goes significantly off-topic from what I normally talk about, but still revolves around technology (and even touches on the potential impact on education). The news about the FBI demanding that Apple unlock the phone of one of the San Bernardino gunmen is everywhere, and the FBI using the suffering of the victims' families to get what they want is not only immoral, it is irresponsible.

One of the most common arguments that comes up regarding encryption and secrets is that if you aren't doing anything wrong, you have nothing to hide. This could not be further from the truth. Many businesses around the world depend on trade secrets, or keeping secret the development and progress of new products and technology. Law enforcement agencies may be protecting the identities of undercover agents, witnesses, or victims. You know, agencies like, say, the FBI. Can you say you have nothing to hide while still demanding answers about the breaches in security at Target, Neiman Marcus, and Michaels? More in line with education, schools and districts must also be sure they are keeping student information secure and private. This is not just something that should be done, but something that must be done. We all have "something to hide", even if we're not doing anything wrong.

The FBI is claiming they hope to discover information on the phone; information that will help prevent other terror attacks. This is highly unlikely, and the FBI knows it. The San Bernardino attackers were a man and a woman. Islamic extremists (ISIS, Taliban) do not use women as "soldiers". This act of terror appears to have been "ISIS-inspired", but that is very different from "ISIS-plotted". The FBI can get access to phone records, even without access to the phone. They likely already have a good idea who the attackers were in contact with, and there is little else they could discover from the phone itself.

Asking Apple to try to create a method to circumvent security measures puts far more people at risk than any possible gain from unlocking this one phone.

There is a belief that the burden on, or cost to, Apple to circumvent the security of the phone is relatively small because they are such a large and wealthy company. Again, this could not be further from the truth. If Apple is successful in gaining access to the phone, it calls into question, at least from the perspective of the public, the actual security of Apple's products. Apple could potentially lose contracts for large-scale deployments to government agencies, businesses, and yes, even school districts. The public perception of ineffective security could also cost Apple consumer sales. The costs go far beyond the developer hours required to gain access to the phone's contents.

There isn't anything the FBI can do to bring back the victims of the attack, and it is disturbing that they are using the grief of the victims' families to advance some hidden and unrelated agenda.

Waiting on the next big thing

After recording the podcast following FETC this year, our group pondered why we didn't really see any major new technology.

I suggested that it might be related to the difficulties the major processor fabrication companies are having shrinking the chips used in our electronics. I quickly realized that this was a topic that my colleagues really had little knowledge of, and that most users of technology probably don't know much about the chips inside the gadgets we use every day.

This post is not intended to be an in-depth technical discussion. Hopefully I can provide a simple explanation of how our electronics have managed to get faster and do more things over the years, and give a quick overview of what is causing a slowdown in some areas of technology.

In 2006 Intel introduced the Core architecture of processors. These processors were manufactured on what Intel referred to as a 65nm (a nanometer is one-billionth of a meter) process. The 65nm process had also been used in the later Pentium 4 processors. 65nm represents a measure of the process, but some "features" in the process are larger than 65nm while others can be smaller.

Late in 2007, Intel began producing processors on a 45nm process. While 45nm is roughly 70% of 65nm, that ratio applies to a linear dimension; chips occupy area, so the scaling is squared. This means the 45nm process can create an identical chip in roughly 48% of the space used by the 65nm process (45^2 / 65^2 = 47.92...). The scaling isn't quite perfect, so the chips don't shrink by the same amount as the process naming implies. Still, you can see that chip manufacturers can pack a whole lot more transistors into the same amount of space used by the older process. Reduced size is not the only advantage to new, smaller processes; smaller processes use less power and generate less heat. The reduced size also normally means that a chip as complex as "last year's" high-end chip can be produced at a lower cost.

In early 2010, just over two years after introducing the 45nm process, Intel released chips produced on the 32nm process (roughly 50% in size compared to 45nm). In mid-2012, Intel started using a 22nm process (roughly 47% in size compared to 32nm). The first sign of trouble came with Intel's chips produced at 14nm (40% of 22nm). Intel released a very limited number of 14nm chips, targeted mainly at low-power laptops. Higher-powered 14nm desktop and laptop chips did not show up until 2015. Intel's roadmap also now shows that products based on their next process (10nm) are not due until late 2017.
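The area scaling implied by the process names can be sketched with a quick calculation (idealized, since, as noted above, real chips don't shrink quite this much):

```python
# Idealized area scaling between successive process nodes.
# Node names are linear measures, so chip area scales with the square.
nodes = [65, 45, 32, 22, 14]  # nm, the Intel processes discussed above

for prev, cur in zip(nodes, nodes[1:]):
    area_ratio = cur ** 2 / prev ** 2
    print(f"{prev}nm -> {cur}nm: chip area ~{area_ratio:.0%} of previous")
```

This reproduces the percentages quoted in the text: roughly 48%, 51%, 47%, and 40% for each successive shrink.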

Intel is not the only chip-making company around. Other big players include TSMC and Samsung. Despite the public disputes between Apple and Samsung, the processors in most iPhones have actually been manufactured by Samsung. The latest iPhones have started using chips manufactured by TSMC. Samsung and TSMC have also started to struggle to make chips smaller. Some rumours suggested that with the iPhone 6 (and 6 Plus), Apple was taking so much of TSMC's limited capacity that other tech companies could not get access to the latest process. AMD and Nvidia are the two major graphics chip designers, and have their graphics chips manufactured primarily by TSMC. Neither company released graphics chips using TSMC's 20nm process.

Limiting the latest and greatest manufacturing technologies to a handful of companies means that only those companies have the potential to make noticeable improvements, but they may not be under pressure to do so. Apple seems to have capitalized on their nearly exclusive access to TSMC's advanced process. Benchmarks for the iPhone 6, and again with the 6S models, showed significant improvements in performance. Note that the iPhone is under competitive pressure from Android smartphones. Intel on the other hand faces little competition in their primary market of computer processors. Intel not only designs the processors, but also owns the manufacturing facilities for their processors. The performance improvements in processors from Intel have been relatively small (5-10% from generation to generation).

What about technology other than smartphones and computer processors?

We are starting to hear more about VR (virtual reality) and AR (augmented reality). Oculus, probably the most recognizable name in VR, announced the system requirements for the Rift VR headset. The cost of building a system to meet those requirements is quite high. Here is a quote from that page, highlighting the importance of the GPU (Graphics Processing Unit).
Today, that system’s specification is largely driven by the requirements of VR graphics. To start with, VR lets you see graphics like never before. Good stereo VR with positional tracking directly drives your perceptual system in a way that a flat monitor can’t. As a consequence, rendering techniques and quality matter more than ever before, as things that are imperceivable on a traditional monitor suddenly make all the difference when experienced in VR. Therefore, VR increases the value of GPU performance.
Remember that AMD and Nvidia are the major sources of graphics chips, and that they likely did not get access to 20nm? Relatively few computers meet the graphics requirements of the Rift.

Other areas of technology may also have been stalled by limited access to the newest chip manufacturing processes. Nvidia makes the chips in the tablet for Google's Project Tango, a computer-vision platform for detecting objects (think self-driving cars). This technology is relevant for robotics, a topic I discussed in the podcast.

Even as the slowdown in technological advancement continues, more companies are finally getting access to the latest manufacturing processes. AMD and Nvidia are planning products based on 14nm and 16nm for release in 2016. AMD has stated that their upcoming graphics chips will make the largest leap in performance per watt in the history of the Radeon brand (AMD's primary graphics brand, introduced in 2000).

Hopefully this means we will see some new and really interesting tech at conferences next year.

Reflections on FETC 2016

This was my fourth trip to Orlando to attend FETC, and there were some notable differences from previous years. Our group was significantly larger than in previous years, and included faculty, staff, masters students, a PhD student, and representatives from companies that work closely with us. We wrapped up FETC with a brief podcast. I will expand on my comments in that recording, and talk about some other things I noticed at FETC 2016.

When talking about the conference itself, the layout and size were noticeably different. The exhibit hall stretched from north to south, with the keynote area at the "back" of the convention center. The exhibit area was definitely smaller than it had been in previous years, but still large enough to keep attendees busy exploring booths.

As noted by my colleagues in the podcast, there wasn't much that was particularly revolutionary or innovative to be found at FETC. This seems to be a reflection of the market in general. We all seem to be waiting for the next "big thing".

While not exactly new, this seemed to be the year of the robot and maker spaces. I was particularly intrigued by Ozobot. I believe this is a great way to introduce young children to basic coding skills. The Ozobot will follow a path drawn out by magic markers, and simple instructions can be given to it by alternating the colours drawn along the path. While a great implementation, I believe there are two challenges. First, what is the next step after the Ozobot? Once a child has mastered the instructions and "played", the Ozobot itself cannot go beyond its very basic programming. Second, the price tag of $50 USD is quite steep for such a simple robot that likely won't see much classroom time. A class set of 18 is $1000, which is not really a deal at all. Some extras are thrown in, but you give up the value of 2 Ozobots to get the extras. If the Ozobot was $20 USD, with a 25-unit bundle (with extras) at $500, I would be more excited.

Sessions and conversations around maker spaces almost always include, or even focus on, the topic of 3D printing. There were a few booths showcasing 3D printers, but it is interesting that none were from the "big players" (Epson, Canon, HP, etc.). It does lead to concern about acquiring a device from a company that might not be around next year.

One "throw back" at FETC was typing instruction. There were several booths focusing on teaching typing skills. I have been told that this is a response to poor results in online tests where students that know the content are still doing poorly because they cannot type quickly enough to finish on time. I imagine these skills are also valuable for collaborative work on Google Docs or Office 365.

I have still been considering the question about what I hope or expect to see in the future for educational technology. Other recent events, including CES, showcased quite a bit in the VR/AR (virtual reality/augmented reality) space. I only saw a little of this at FETC. I know the system requirements for Oculus Rift are fairly demanding, and it is also very expensive. If that was the only option, I would understand why it didn't make an appearance at FETC, but Google Cardboard seems a reasonable choice for VR in the classroom. Hopefully we see more immersive and interactive uses of Cardboard soon.

Remote Student Participation

On Wednesday we learned that one of our students would need to participate in classes remotely. Starting Monday.

Of course the first suggestion volunteered to me was, "Can't we just Skype the student in?" Our classes are not standard university undergraduate lectures. Our instructors are typically modelling the K-12 classroom. They move around quite a bit, and the students participate in small group activities. Skype running on a stationary device was not going to work.

I had a pretty good idea that what I really wanted was a VGo, but there was no way we were getting the funds for that. Even if, by some miracle, we managed to convince "the powers" to buy a VGo, it was virtually impossible that the convincing, purchasing, delivery, and setup would happen before Monday morning.

A couple of years ago I discovered Swivl at an Ed Tech conference (I honestly can't remember which one). I encouraged our Instructional Resource Centre to purchase a couple of them for use by our students for their micro-teaching videos. The students record themselves delivering a lesson activity, and then review it to evaluate and adjust their teaching methods. The students would often set up cameras on tripods, or ask another student to do the recording. Neither method was ideal. A tripod did not allow the student to move around, and audio was troublesome in both scenarios.

With Swivl, the "teacher" wears a wireless tracker (with integrated microphone), and the Swivl base turns and pivots to follow the tracker. The recording device (typically a smartphone or tablet) sits on the base. A single, short audio cable connects the base to the device to record the audio from the mic integrated into the tracker. It really is impressive in its simplicity, and works quite well.

The problem is that Swivl's primary use and design is around recording lesson activities, not video conferencing. The Swivl base connects to the recording device using a male-to-male, 4-segment (TRRS) 3.5mm cable. This is a fairly standard plug found in pretty much every smartphone and tablet. It carries both the mic-in and audio-out signals. Unfortunately, this cable runs directly from the Swivl base to the device, with no splitter or plug in the base for the audio out.

Our initial tests using Lifesize Video (the standard video conferencing solution used by our university) and an iPad confirmed that audio was being recorded from the mic in the tracker, but no audio would play back unless the cable from the Swivl base was unplugged from the iPad.

We decided to try a 3.5mm 4-segment to 2 x 3.5mm 3-segment splitter.

Adapter to break out the mic in and audio out connections
We actually had to use two of these adapters. One was used to convert the 4-segment mic out from the Swivl base to a standard 3-segment mic line. The second was connected to the iPad, allowing us to plug in both the mic line from the Swivl base and a set of external speakers.


Swivl video conference cart
Our Swivl telepresence setup

With everything plugged in, we started a Lifesize Video session and everything worked! The final bit was putting everything on a cart that could be easily moved between classes, taping together some of the cabling (to try to prevent instructors/students from unplugging cables from splitters), zip-tying some of the cables to tidy it up, and labeling plugs that couldn't easily be taped in place ("to iPad").

It would be nice to have the cart completely wireless, but we settled on a single power cord. The Swivl has a 4-hour battery life (estimate), and the student has back-to-back classes that total 5 hours. We also didn't have battery-powered speakers.

It would also be better if the remote student could control the direction of the Swivl rather than relying on the tracker, especially during the small group sessions. This is a feature of Swivl Cloud Live. Swivl Cloud Live is in beta, and I did submit the form to sign up. I see more experimenting in the next couple of weeks.

Friday morning we conducted a test session with the student and all went well. The first class is Monday morning. Fingers crossed.