The Usage Of 5G IoT Radio Network Services

With the coming of 5G IoT radio network services, we are embarking on an overhaul of the global communications infrastructure — effectively replacing one wireless architecture created this century with another that aims to lower energy consumption and maintenance costs.

The principal stakeholders of 5G wireless technology — telecommunications providers, transmission equipment makers, antenna manufacturers, and server manufacturers — are all looking to deliver on the promise that, once all of 5G’s components are fully deployed and operational, cables and wires will become a thing of the past, when it comes to delivering communications, entertainment, network connectivity, and a host of other services.

In this article, we’ll be looking at how close they are to making good on this promise, by examining some of the existing and proposed use cases for 5G IoT radio network services.

What Is 5G?

5G is the fifth generation of mobile network technology. Each generation has its own defining characteristics such as frequency bands, advances in transmission technology, and bit rates.

The first generation, or 1G, was introduced in the early 1980s. Its analogue systems were never unified under a single official standard: several regional standards emerged, but none became global. 2G, based on the digital GSM standard, launched in around 1992, at the same time as much of the world was adopting CDMA. The global standards community finally came together in the 3rd Generation Partnership Project (3GPP), and 3G appeared in the early 2000s. 4G was standardised in 2012.

At maximum performance, 4G networks have a theoretical download speed of 1 gigabit per second. 5G networks start at around 10 gigabits per second, with a theoretical maximum of 20 gigabits per second or beyond. 5G also offers lower latency, or network lag, which essentially means less time for information to travel through the system.
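To make those figures concrete, here's a quick back-of-the-envelope calculation (illustrative Python; the rates are the theoretical peaks quoted above, not real-world speeds) comparing download times for a large file:

```python
# Rough download-time comparison at the headline rates quoted for 4G and 5G.
# Note the bit/byte distinction: file sizes are usually quoted in bytes,
# link rates in bits per second.

def download_time_seconds(file_size_gigabytes: float, link_rate_gbps: float) -> float:
    """Time to move a file of the given size over a link of the given rate."""
    file_size_gigabits = file_size_gigabytes * 8  # 1 byte = 8 bits
    return file_size_gigabits / link_rate_gbps

FILE_GB = 50  # e.g. a large game download

t_4g = download_time_seconds(FILE_GB, 1)   # 4G theoretical peak: 1 Gbps
t_5g = download_time_seconds(FILE_GB, 20)  # 5G theoretical peak: 20 Gbps

print(f"4G: {t_4g:.0f} s, 5G: {t_5g:.0f} s")  # 400 s vs 20 s
```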

In addition to raw speed and stability, 5G offers a form of segmentation known as network slicing. This allows for multiple virtual networks to be created on top of a shared physical infrastructure. It also provides the ability to span across multiple parts of a network, such as the core network, transport layer, or access network. Each network slice can create an end to end virtual network, with both compute and storage functionality.
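As an illustration of the slicing idea, the sketch below models a handful of hypothetical slices, each with its own quality-of-service profile, defined over one shared physical network. The slice names and figures are invented for the example:

```python
from dataclasses import dataclass

# Illustrative model of network slicing: several virtual networks, each with
# its own quality-of-service profile, sharing one physical infrastructure.

@dataclass
class NetworkSlice:
    name: str
    max_latency_ms: float
    min_bandwidth_mbps: float
    spans: tuple  # the parts of the network the slice crosses, end to end

physical_network = ("core", "transport", "access")

slices = [
    NetworkSlice("video-streaming", max_latency_ms=20, min_bandwidth_mbps=300, spans=physical_network),
    NetworkSlice("smart-metering", max_latency_ms=10, min_bandwidth_mbps=0.1, spans=physical_network),
    NetworkSlice("vehicle-control", max_latency_ms=1, min_bandwidth_mbps=10, spans=physical_network),
]

# Each slice is end to end: it spans every part of the shared infrastructure.
for s in slices:
    assert s.spans == physical_network
    print(f"{s.name}: <= {s.max_latency_ms} ms, >= {s.min_bandwidth_mbps} Mbps")
```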

What Are 5G IoT Radio Network Services?

Internet of Things or IoT devices and platforms use a variety of wireless technologies, including short-range technology of the unlicensed spectrum, such as Wi-Fi, Bluetooth, or ZigBee, and technologies from the licensed spectrum, such as GSM and LTE. The licensed technologies offer a number of benefits for IoT devices, including enhanced provisioning, device management, and service enablement.

The emerging licensed technology of 5G IoT radio network services provides a range of opportunities to the IoT which are not available with 4G or other technologies. These include the ability to support a massive number of static or mobile IoT devices, which have a diverse range of speed, bandwidth, and quality of service requirements.

Much of the global 5G plan involves multiple, simultaneous antennas, some of which use a spectrum that telecommunications providers agree to share with each other. Other parts of the deployment will include portions of the unlicensed spectrum that telecommunications regulators will keep open for everyone at all times. For this reason, some of the 5G technologies include systems that will enable transmitters and receivers to arbitrate access to unused channels in the unlicensed spectrum.

Most of these 5G IoT radio network services can be grouped under three main categories: enhanced mobile broadband (eMBB), massive IoT (also known as massive Machine Type Communications or mMTC), and critical communications.

Enhanced Mobile Broadband (eMBB)

Enhanced mobile broadband (eMBB) under 5G will have the capacity to support large volumes of data traffic and large numbers of users, including IoT devices. Some estimates put this capacity at a minimum of 100GB per month per customer, greatly expanding the consumer IoT market by delivering high-speed, low-latency, reliable, and secure connections. In addition, the cost of data transmission per bit is set to decrease, making the prospect of “unlimited” data bundles finally feasible.

Enhanced mobile broadband will support the delivery of high definition video at the consumer level (e.g., for TV and gaming), and immersive communications, such as video calls and augmented reality and virtual reality (AR and VR). Some predictions for 5G latency put it as low as 1 millisecond between a device and its base station, increasing the prospects for fingertip control over remote assets (the so-called “tactile internet”), and high definition video conferencing. It will also facilitate data transfer for smart city services, including IoT video cameras for surveillance.

eMBB is intended to service more densely populated metropolitan areas with download speeds approaching 1 Gbps (gigabit per second) indoors, and 300 Mbps (megabits per second) outdoors. This will require the installation of extremely high-frequency millimetre-wave (mmWave) antennas throughout the landscape, potentially numbering in the hundreds or even thousands.

For more rural and suburban areas, enhanced mobile broadband is looking to replace the 4G LTE system with a new network of lower-power omnidirectional antennas that provide a 50 Mbps download service.

Massive IoT (mMTC)

Massive Machine Type Communications, or mMTC, allows machine-to-machine (M2M) and Internet of Things (IoT) applications to operate without imposing burdens on other classes of service. 3GPP narrowband IoT (NB-IoT) and Long-Term Evolution Machine Type Communications (LTE-M) are existing technologies which are integral to the new breed of 5G era fast broadband communications.

These 4G technologies are expected to continue under full support in 5G networks for the foreseeable future. They are currently providing mobile IoT solutions for smart cities, smart logistics, and smart metering. As 5G evolves, they will be used to access multimedia content, stream augmented reality and 3D video, and to cater for critical communications like factory automation, and smart power grids.

mMTC maintains service levels by implementing a compartmentalised service tier for devices that require a download bandwidth as low as 100 Kbps, but with latency that’s kept low at around 10 milliseconds.

Critical Communications

For critical communications requirements where bandwidth matters less than speed, Ultra Reliable and Low Latency Communications (URLLC) technology can provide an end-to-end latency of 1 millisecond or less. This level of service would enable autonomous or self-driving vehicles, where decision and reaction times have to be near instantaneous. For enterprises, the extreme reliability and low latency of 5G will allow for smart energy grids, enhanced factory automation, and other demanding applications with rigorous requirements.
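The three service categories can be thought of as a dispatch decision driven by a device's requirements. The toy classifier below uses illustrative thresholds loosely based on the figures quoted in this article:

```python
# A toy dispatcher that assigns an IoT device to one of the three 5G service
# categories discussed in this article. Thresholds are illustrative only.

def service_category(bandwidth_kbps: float, max_latency_ms: float) -> str:
    if max_latency_ms <= 1:
        return "URLLC"  # critical communications: latency comes first
    if bandwidth_kbps <= 100:
        return "mMTC"   # massive IoT: low bandwidth, latency around 10 ms
    return "eMBB"       # everything bandwidth-hungry

print(service_category(100, 10))     # smart meter -> mMTC
print(service_category(50_000, 20))  # HD video camera -> eMBB
print(service_category(1_000, 1))    # vehicle control -> URLLC
```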

URLLC actually has the potential to make 5G competitive with satellite, which opens up the possibility of using 5G IoT radio network services as an alternative to GPS for geographical location.

A supplementary set of 5G standards called “Release 16” that was scheduled for the end of 2019 includes specifications for Vehicle-to-Everything (V2X) communications. This technology incorporates low-latency links between moving vehicles (especially those with autonomous driving systems) and cloud data centres. This enables much of the control and maintenance software for moving vehicles to operate from within static data centres, staffed by human personnel.

Use Cases For 5G IoT Radio Network Services

Industry analysts reckon that the initial costs of 5G infrastructure improvement could be astronomical. In order to cover themselves financially, telecommunications companies and other stakeholders in the ecosystem will need to offer new classes of service to new customer segments.

A number of use cases currently exist or are on the horizon.

Cloud And Edge Computing

The wireless technology of 5G IoT radio network services offers the potential for distributing cloud computing services much closer to users than most of the data centres operated by major players like Amazon, Google, or Microsoft. For critical, high-intensity workloads, this could make 5G service providers viable competitors as cloud providers.

Similarly, by bringing processing power closer to the consumer and minimising the latency caused by distance, 5G becomes a vehicle for edge computing environments, where data handling has to occur as close to devices and applications as possible. With latency reductions of a sufficient magnitude, applications that currently require desktop systems or laptops could be relocated to smaller and more mobile devices with significantly less on-board processing power.
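The physics behind this is simple: even at the speed of light in fibre, distance costs latency. A rough calculation (the distances are hypothetical) shows why a nearby edge site beats a distant cloud region:

```python
# One-way propagation delay over fibre for a distant cloud region versus a
# nearby 5G edge site. Light travels through fibre at roughly two thirds of
# its vacuum speed; distances are illustrative.

SPEED_OF_LIGHT_KM_S = 300_000
FIBRE_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3  # ~200,000 km/s in glass

def one_way_delay_ms(distance_km: float) -> float:
    return distance_km / FIBRE_SPEED_KM_S * 1000

print(f"Distant data centre (2000 km): {one_way_delay_ms(2000):.2f} ms")  # 10.00 ms
print(f"Edge site (10 km): {one_way_delay_ms(10):.2f} ms")                # 0.05 ms
```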

Automotive Industry

The high bandwidth connectivity of 5G can provide a seamless and high quality of service for vehicle navigation, infotainment, and other services in standard and autonomous vehicles. Low latency and high bandwidth connections can support vehicle platoons, which improve fuel efficiency and reduce the number of drivers required on the road.

Near-zero latency is enabling the development of driverless or autonomous vehicle technology, while network slicing provides road and infrastructure managers with a greater degree of flexibility, and the option to allocate network slices to specific functions.

(Image source: GSMA)

While it’s unlikely that there will be mass adoption of fully autonomous vehicles on public roads for some years to come, connected and smart vehicles are becoming increasingly popular. For instance, 75% of cars shipped in 2020 in Australia are likely to have some form of connectivity.

Media And Content Delivery

The high bandwidth and low latency of 5G enable the high volume transmission of high definition video in real time. This makes both video conferencing and streaming entertainment faster and more engaging for the participants. These activities also become more versatile, as 5G can support live broadcasting using smartphones, and interactive and immersive VR experiences.

5G Fixed Wireless Access (FWA) systems allow home broadband services to be set up quickly and cost-effectively in rural and other areas that don’t have access to fixed line home broadband. FWA can deliver speeds similar to fibre-based services, at a considerably lower cost (around 74%) than wired connections.

Coupled with edge computing, the low latency and high bandwidth of 5G can enhance the cloud gaming experience, with the edge processing of large volumes of data reducing the need for more powerful AR / VR headsets. Similar enhancements using augmented and virtual reality are enabling organisations in the retail sector to create memorable customer experiences.

Manufacturing

5G IoT radio network services allow the connection of large numbers of devices in a secure and cost-efficient manner, while low latency connectivity enables the virtual control of machines. Fewer processing units are therefore required on the factory floor, while telemetry or information exchange can occur between a large number of interconnected devices in real time.

As with the automotive industry, network slicing allows manufacturers to allocate network slices to specific functions, and a combination of cloud computing, eMBB, and mMTC can facilitate the transmission of real time information at high resolutions.

Health Care

Cables and wires in operating theatres could be replaced by the low latency and secure wireless connections made possible through 5G. For hospital administration, data analytics across medical records will have improved efficiency, while AR and VR delivered via low latency and high bandwidth 5G can aid in diagnosis, and the training of medical staff. Remote real-time diagnostics can also be enhanced by delivering high quality video over 5G.

In future, 5G IoT radio network services may even power robots that dispense pharmaceuticals, support diagnostics, and perform certain types of surgery.

Smart Cities

5G IoT networks have the potential to aid in city management, for example, through the deployment of city-wide air quality monitors, and alert systems for health and safety hazards. They also make possible the mass digitalisation of some public services, and the use of connected vehicles for police and emergency services, linked to traffic lights.

Network slicing will allow city managers to provide higher security and reliability for mission-critical services.

Smart Utilities

Improved edge computing will enable utilities providers to better scale their number of connected devices, and deploy platforms and analytics capable of handling the increased data volumes in real-time.

5G wireless could provide a flexible and cost-effective alternative to last mile fibre, and assist the longer term management of complex virtual energy production plants.

Looking Ahead

While 5G wireless will do away with much of the cabling architecture of current cities, the platform’s requirements for short-range infrastructure — numerous small, low power base stations containing the transmitters and receivers — will create a new and characteristic form of landscape.

The 5G mobile cellular networks in use today are evolving from existing 4G networks, which will continue to serve many functions. Moving forward, 5G IoT radio network services providers will need to ensure that their networks support both current and future use case requirements.

What’s New With BLE5? And How Does It Compare To BLE4?

Since its introduction in 1998, Bluetooth wireless has carved out a niche as one of the principal technologies enabling users to connect phones and other portable equipment without cables. Heralding the next phase in the evolution of this technology is Bluetooth 5.0, the latest version of the platform, whose Low Energy (LE) variant, BLE5, brings significant advantages over its predecessor, BLE4.

Drawing on updated forecasts from ABI Research and insights from several other analyst firms, the Bluetooth® Market Update 2020 examines the growth and health of the Bluetooth SIG member community, trends and forecasts for each of the key Bluetooth wireless solution areas, and predictions, trends, and opportunities in Bluetooth vertical markets.


According to this year’s report, annual Bluetooth enabled device shipments will exceed six billion by 2024, with Low Energy technologies contributing to much of this activity. In fact, Bluetooth Low Energy (LE) technology is setting the new market standard, with a Compound Annual Growth Rate (CAGR) of 26%.


By 2024, 35% of annual Bluetooth shipments will be LE single-mode devices, and with the recent release of LE Audio, forecasts indicate that Bluetooth LE single-mode device shipments are set to triple over the next five years.

Within the Bluetooth LE market, BLE5 is making its mark on the Bluetooth Beacon and Internet of Things (IoT) sectors, creating new opportunities in areas such as Smart Building, Smart Industry, Smart Homes, and Smart Cities using mesh connections.

Some Bluetooth Basics

Before considering how BLE5 compares to what’s come previously, we’ll give you a basic understanding of the technology involved, and how it has evolved to its current level.

Bluetooth is both a high-speed, low-power wireless technology and a specification (IEEE 802.15.1) for the use of low power radio communications that can link phones, computers and other network devices over short distances without wires.

Links are established via low cost transceivers embedded within Bluetooth-compatible devices. The technology typically operates on the frequency band of 2.45GHz, and can support up to 721 Kbps of data transfer, along with three voice channels. This frequency band has been set aside through international agreement for the use of industrial, scientific, and medical devices.

Standard Bluetooth links can connect up to eight devices simultaneously, with each device offering a unique 48-bit address based on the IEEE 802 standard. Connections may be point-to-point or point-to-multipoint.

A Bluetooth Network consists of a Personal Area Network or piconet, which contains a minimum of two to a maximum of eight Bluetooth peer devices — usually in the form of a single “master” and up to seven “slaves.”


The master device initiates communication with other devices, and governs the communications link and data traffic between itself and the slave devices associated with it. A slave device may only begin its transmissions in a time slot immediately following the one in which it was addressed by the master, or in a time slot explicitly reserved for its use.
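The polling discipline described above can be sketched as a simple model: one master, up to seven slaves, and a schedule in which each slave transmits only in the slot after it has been addressed. This is purely illustrative, not a protocol implementation:

```python
# A minimal model of a Bluetooth piconet: one master and up to seven active
# slaves, where a slave replies only in the slot immediately after the one in
# which the master addressed it.

class Piconet:
    MAX_SLAVES = 7

    def __init__(self, master: str):
        self.master = master
        self.slaves = []

    def add_slave(self, name: str):
        if len(self.slaves) >= self.MAX_SLAVES:
            raise ValueError("a piconet supports at most seven active slaves")
        self.slaves.append(name)

    def poll_cycle(self):
        """Master addresses each slave in turn; the slave replies next slot."""
        schedule = []
        for slot, slave in enumerate(self.slaves):
            schedule.append((2 * slot, self.master, slave))      # master -> slave
            schedule.append((2 * slot + 1, slave, self.master))  # slave -> master
        return schedule

net = Piconet("phone")
for device in ("headset", "keyboard", "watch"):
    net.add_slave(device)

for slot, sender, receiver in net.poll_cycle():
    print(f"slot {slot}: {sender} -> {receiver}")
```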

How Bluetooth Has Evolved

In 1998, the technology companies Ericsson, IBM, Nokia, and Toshiba formed the Bluetooth Special Interest Group (SIG), which published the first version of the platform in 1999. This first version could achieve a data transfer rate of 1Mbps. Version 2.0+EDR had a data speed of 3Mbps, while version 3.0+HS stepped up its speed of data transfer to 24 Mbps.

(Image source: Amar InfoTech)

Which brings us to versions 4 and 5.

How BLE5 Compares To BLE4

Versions 1 to 3 of the platform operated via classic Bluetooth radio, which consumes a comparatively large amount of energy. Bluetooth Low Energy technology, or BLE, was originally created to reduce the power consumption of Bluetooth peripherals. It was introduced with Bluetooth 4.0 and continued to improve through the BLE4 series, whose last version was 4.2.

Design and performance-wise, BLE5 has the edge over BLE4, in a number of different aspects.

1. Speed

BLE5 doubles the raw over-the-air data rate of BLE4: Bluetooth 5.0 transmits at 2 Mbps, against the 1 Mbps of Bluetooth 4.2. After overheads such as addressing, this translates to a net data rate of about 1.4 Mbps. While this isn’t fast enough to stream video, it does permit audio streaming.
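The relationship between raw and net data rates can be shown with a back-of-the-envelope calculation, assuming roughly 70% of the raw rate survives protocol overheads (the assumption implied by the roughly 1.4 Mbps figure quoted for BLE5):

```python
# Back-of-the-envelope net throughput for BLE 4.2 vs BLE 5, assuming ~70% of
# the raw over-the-air rate survives overheads such as addressing and
# inter-frame spacing. The efficiency figure is an illustrative assumption.

def net_throughput_mbps(raw_rate_mbps: float, efficiency: float = 0.7) -> float:
    return raw_rate_mbps * efficiency

ble4 = net_throughput_mbps(1.0)  # BLE 4.x: 1 Mbps raw -> ~0.7 Mbps net
ble5 = net_throughput_mbps(2.0)  # BLE 5:   2 Mbps raw -> ~1.4 Mbps net

print(f"BLE4 ~{ble4:.1f} Mbps, BLE5 ~{ble5:.1f} Mbps")
```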

2. Range

The range of BLE5 is up to four times that of Bluetooth 4.2. A BLE4 solution can reach a maximum range of about 50m, so with Bluetooth 5.0 something in the vicinity of 200m is possible, and some researchers suggest that BLE5 can maintain connections at up to 300 metres (985 feet). These figures are for outdoor connections.

Indoors, Bluetooth 5 actively operates within a radius of 40 metres. Compare this with the 10m indoor radius of BLE4, and it’s clear that BLE5 has the advantage when it comes to using wireless headphones some distance away from your phone, for example, or for connecting devices throughout a house, as opposed to within a single room.
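A rough way to see where these multiples come from: in free space, every 6 dB of extra link budget roughly doubles the achievable distance. The sketch below is illustrative only; real indoor ranges are far shorter because of walls and interference:

```python
# Illustrative free-space view of range extension: each 6 dB of extra link
# budget roughly doubles the free-space distance. Base range is the ~50 m
# outdoor BLE4 figure quoted in the text.

def range_multiplier(extra_link_budget_db: float) -> float:
    return 10 ** (extra_link_budget_db / 20)

base_range_m = 50
for gain_db in (0, 6, 12):
    print(f"+{gain_db} dB -> ~{base_range_m * range_multiplier(gain_db):.0f} m")
```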

3. Broadcast Capability

Bluetooth 5 supports data packets eight times bigger than the previous version’s, with a message capacity of about 255 bytes (BLE4’s message capacity is about 31 bytes). This gives BLE5 considerably more space for its actual data load, and with more data bits in each packet, the net data throughput is also increased.
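The throughput benefit of bigger packets comes from amortising fixed per-packet overhead across more payload. In the sketch below, the 14-byte overhead figure is an illustrative assumption, not a specification value:

```python
# Packet efficiency: share of each packet that carries actual payload.
# A larger payload amortises the fixed per-packet overhead (here assumed to
# be 14 bytes of preamble, access address, header, and CRC).

OVERHEAD_BYTES = 14

def efficiency(payload_bytes: int) -> float:
    return payload_bytes / (payload_bytes + OVERHEAD_BYTES)

print(f"BLE4 (31-byte payload):  {efficiency(31):.0%}")   # ~69%
print(f"BLE5 (255-byte payload): {efficiency(255):.0%}")  # ~95%
```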

Largely because of the increased range, speed, and message capacity of BLE5, Bluetooth 5 Beacon has been growing in popularity.

4. Compatibility

In terms of compatibility, BLE4 works best with devices built for version 4 of the series, and cannot take advantage of Bluetooth 5 features. BLE5 is backwards compatible with all versions of Bluetooth up to version 4.2, but with the limitation that not all Bluetooth 5 features will be available when connected to these older devices.

5. Power Consumption

While both BLE5 and BLE4 are part of the Bluetooth Low Energy ecosystem, BLE5 has been designed to consume less power than its predecessor. So Bluetooth 5 devices can be left running for longer periods, without putting too much stress on their batteries.

Historically, this has been a particular problem with smart watches and devices with smaller form factors, like IoT sensors. With the redesigned power consumption system in Bluetooth 5, most such devices will see increased battery life.

6. Resiliency

BLE5 was designed with the knowledge that important Bluetooth processes often take place in congested radio environments, which negatively affect its operation. Compared to Bluetooth 4.2, BLE5 works much more reliably in such congested environments.

7. Security

In 2017, security researchers discovered several exploits in Bluetooth software (collectively called “BlueBorne”) affecting various platforms, including Microsoft Windows, Linux, Apple iOS, and Google’s Android. Some of these exploits could permit an attacker to connect to devices or systems without authentication, and to effectively hijack a complete device.

BLE5 has addressed much of this vulnerability with bit-level security and authentication controls using a 128-bit key.

What This Means For Practical Applications Of BLE5

With its low power consumption, inexpensive hardware, and small form factors, BLE5 provides scope for a wide range of applications.

In previous iterations, Bluetooth Low Energy technology was largely used for storage, beacons, and other low-power devices, but came with some serious limitations. For instance, wireless headphones were unable to exchange messages under BLE4.

With Bluetooth 5.0, all audio devices can share data via Bluetooth Classic, and Bluetooth Low Energy is now more applicable for wearable devices, smart IoT devices, fitness monitoring equipment, and battery-powered accessories such as wireless keyboards.

BLE5 also includes a feature which makes it possible to recreate the sound on two connected devices (headphones, speakers, televisions, etc.) at the same time. Connected to a common “command centre”, each device can independently choose its information transfer priority — greater transfer speed, or increased distance over which devices can interact.

Bluetooth 5 also allows for serial connections between devices. So for components of the IoT, each device can connect to a neighbouring element, rather than having to seek out a distant command centre. This has positive implications for the scaling of larger IoT deployments.

At the domestic level, Bluetooth mesh networking is playing a key role in automating the smart homes of tomorrow. Major home automation platforms such as Alibaba and Xiaomi are developing Bluetooth mesh networks to meet a growing demand for device networks in the home.

Mesh networking is also providing a foundation for commercial lighting control systems supported by innovators like Osram, Murata, Zumtobel, and Delta Electronics. These systems employ Bluetooth mesh networking to create large-scale device networks that can act as the central nervous system of a building. Applications span the retail, tourism, and enterprise sectors, and can even help organisations establish a platform that enables advanced building services, such as asset tracking.

At the consumer level, Bluetooth LE Audio under BLE5 now has enhanced performance, which has enabled support for hearing aids, and introduced Audio Sharing. This platform enhancement enables the transmission of multiple, independent, synchronised audio streams, providing a standardised approach for developers to build high-quality, and truly wireless ear buds. And the new Broadcast Audio feature enables a source device to broadcast an audio stream to an unlimited number of audio sink devices, opening up new opportunities for innovation.

So the evolution of Bluetooth from BLE4 to BLE5 sees performance improvements that go beyond increased data rates, wider range, and more broadcast capacity. And applications for now and the future may include IoT, smartphones, Bluetooth beacons, and numerous other devices.

Kristofer Månsson

Kristofer is an outgoing person who brings a positive attitude to the group and will not quit until the job is done. Lately he has gradually shifted his interests from the technical aspects of software development towards business and project management, with a stronger emphasis on leadership in advanced technical projects.

He is an experienced development manager across most aspects of system development. His long experience developing software systems, in both the front end and the back end, gives him a unique profile when managing software development teams.

He has experience from roles as CTO, Project Manager, Team Leader, Scrum Master, Advisor, Frontend Developer and System Developer.


Johan Lövgren

Johan is an extremely dynamic and energetic person who brings drive and competence to the team. An entrepreneur for several years, he has proven capable of delivering great results in record time and on a small budget. His work has been successfully released to the market and has resulted in well-known products.

His broad competence ranges from design to electronics development and embedded coding, a result of his personality as well as his MSc degree from Chalmers University of Technology.

Andreas Angervik

Andreas is a person with a great sense of humour combined with strong skills in Java and backend development technologies. During his time with Vinnter, he has shown he can deliver great results while helping the rest of us grow in competence through his knowledge-sharing mentality.

He is, and will become even more so, one of our AWS cloud experts.

The Evolution of Micro Processing Units (MPUs)

Microprocessors or micro processing units (MPUs) are nothing short of amazing. By integrating the complete computation engine onto one electronic component, the computing power that once required a room full of equipment can now be fabricated on a single chip, usually about the size of a fingernail, though it can be much smaller still.

Serving as the central processing unit (CPU) in computers, microprocessors contain thousands of electronic components and use a collection of machine instructions to perform mathematical operations and move data from one memory location to another. They contain an address bus that sends addresses to memory, read (RD) and write (WR) lines to tell the memory whether it wants to set or get the address location, and a data bus that sends and receives data to and from memory. Micro processing units also include a clock line that enables a clock pulse to sequence the processor, and a reset line that resets the program counter and restarts execution.
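Those bus signals can be mimicked in a toy model: place an address, assert read or write, move data over the data bus, and use the reset line to restart execution. This is a teaching sketch, not a model of any real processor:

```python
# A toy model of the bus operations described above: the MPU places an
# address on the address bus, asserts RD or WR, and moves data over the
# data bus. The reset line clears the program counter.

class TinyMPU:
    def __init__(self, memory_size: int = 16):
        self.memory = [0] * memory_size
        self.pc = 0  # program counter, cleared by the reset line

    def write(self, address: int, data: int):
        """WR line asserted: memory stores the value at the address."""
        self.memory[address] = data

    def read(self, address: int) -> int:
        """RD line asserted: memory returns the value at the address."""
        return self.memory[address]

    def reset(self):
        """Reset line asserted: program counter back to zero."""
        self.pc = 0

mpu = TinyMPU()
mpu.write(0x3, 42)    # set a memory location
print(mpu.read(0x3))  # get it back: 42
```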

(Basic micro processing unit)

The microprocessor is at the very core of every computer, be it a PC, laptop, server, or mobile device, serving as the instrument’s “brain”. They are also found in home devices, such as TVs, DVD players, microwaves, ovens, washing machines, stereo systems, alarm clocks, and home lighting systems. Industrial items contain micro processing units, too, including cars, boats, planes, manufacturing equipment and machinery, gasoline pumps, credit card processing units, traffic control devices, elevators, and security systems. In fact, pretty much everything we do today depends on microprocessors – and they are, of course, a fundamental component of Internet of Things (IoT) and Industrial Internet of Things (IIoT) devices which are becoming more and more prevalent in homes and crucial to businesses all over the globe.

It’s safe to say that these tiny pieces of equipment have had – and will continue to have – an enormous influence technologically, economically, and culturally. But where did micro processing units first originate, and what can we expect from them in the future?

A Brief History of Micro Processing Units

The very first commercially available micro processing unit was the Intel 4004, released by Intel Corporation way back in 1971. The 4004 was not very powerful, however, and not very fast – all it could do was add and subtract, and only 4 bits at a time. Even so, it delivered the same computing power as the first electronic computer built in 1946 – which filled an entire room – so it was still impressive (revolutionary, in fact) that everything was on one tiny chip. Engineers could purchase the Intel 4004 and then customize it with software to perform various simple functions in a wide variety of electronic devices.

(The Intel 4004)

The following year, Intel released the 8008, soon followed by the Intel 8080 in 1974 – both 8-bit microprocessors. The 8080 was commercially popular, and could represent signed numbers ranging from -128 to +127 – an improvement over the 4004’s -8 to +7 range, though still not particularly powerful, and so the 8080 was mainly used for control applications. Other micro processing units, such as the 6800 from Motorola and the Z80 from Zilog, were also popular at this time.
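Those signed ranges fall straight out of two’s complement arithmetic: an n-bit word represents values from -2^(n-1) up to 2^(n-1) - 1, as a few lines of Python show:

```python
# Signed integer ranges as a function of two's complement word width.

def signed_range(bits: int) -> tuple:
    return (-2 ** (bits - 1), 2 ** (bits - 1) - 1)

print(signed_range(4))   # 4-bit (Intel 4004):  (-8, 7)
print(signed_range(8))   # 8-bit (Intel 8080):  (-128, 127)
print(signed_range(16))  # 16-bit (third generation): (-32768, 32767)
```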

The third generation of 16-bit micro processing units came between 1979 and 1980, and included the 8088, 80186 and 80286 from Intel, and the Motorola 68000 and 68010. These microprocessors were four times faster than their second-generation predecessors.

(Table of various microprocessors Intel has introduced over the years)

The fourth generation of 32-bit microprocessors were developed between 1981 and 1995. With 32-bit word size, these processors became very popular indeed as the CPU in computers. In 1993, following a court ruling two years earlier which prevented Intel from trademarking “386” as the name of its then most powerful processor, the company released the 80586 by the name Intel Pentium, opening a new era in consumer microprocessor marketing. No longer were processors referred to solely by numbers, but instead carried a brand name, and the trademarked “Pentium” soon became something of a status symbol amongst computer owners.

The fifth generation arrived in 1995 with high-performance and high-speed 64-bit processors. As well as new versions of Pentium, over the years these have included Celeron, Dual, and Quad-core processors from Intel, and many more from other developers including Motorola, IBM, Hewlett Packard and AMD. See Computer Hope’s “Computer Processor History” for an extended list of computer processors over the years, or the Wikipedia entry “Microprocessor Chronology”.

The Future of Microprocessors

As time and technology advance, microprocessors get increasingly powerful. Today, nearly all processors are multi-core, which improves performance while reducing power consumption. A multi-core processor works in exactly the same way as two or more single microprocessors. However, as a multi-core processor only uses one socket within the system, there is a much faster connection between the processor and the computer. Intel remains the strongest competitor in the microprocessor market today, followed by AMD.

Micro processing units have also become smaller and smaller over the years. In the 1960s, computer scientist and Intel co-founder Gordon Moore made an interesting observation: every twelve months, engineers were able to double the number of transistors on a square inch of silicon. This held true for about ten years; then, in 1975, Moore revised his forecast for the next decade to a doubling every 24 months, which indeed proved to be more or less accurate until around 2012.
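Moore’s revised observation is easy to express as arithmetic. Starting from a hypothetical 10,000-transistor chip in 1975 (the figure is invented for the example) and doubling every 24 months:

```python
# Moore's revised observation as arithmetic: transistor counts doubling
# every 24 months. The 10,000-transistor starting point is hypothetical.

def transistors(start_count: int, start_year: int, year: int,
                months_per_doubling: int = 24) -> int:
    doublings = (year - start_year) * 12 // months_per_doubling
    return start_count * 2 ** doublings

for year in (1975, 1985, 1995, 2005):
    print(year, f"{transistors(10_000, 1975, year):,}")
```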


However, we’re now starting to reach the physical limits for how small transistors can get. Up until recently, the industry standard was 14 nanometres (1nm = one-billionth of a metre). Then came Apple’s A12 Bionic processor – which powers the iPhone XR, XS, and XS Max – measuring in at 7nm. Since then, IBM has been experimenting with 5nm chips, and researchers at MIT and the University of Colorado have developed transistors that measure a record-setting 2.5nm wide.

However, Moore’s Law cannot continue ad infinitum, for a number of reasons. For starters, we must consider the threshold voltage, i.e. the voltage at which a transistor allows current to pass through. The problem lies in the imperfection of a transistor behaving as a switch: it leaks a small amount of current when turned off, and the situation worsens as the density of transistors on a chip increases. This takes a heavy toll on transistor performance, which can only be maintained by increasing the threshold voltage. As such, though transistor density may be increased, it provides comparatively little improvement in speed and energy consumption, meaning the operations performed on new chips would take more or less the same time as on the chips used today – unless a better architecture is implemented to solve the problem.

Due to such limitations, researchers are exploring new solutions and new materials in place of silicon – such as gallium oxide, hafnium diselenide, and graphene – to keep improving the performance of microprocessors.

Single-Board Microcomputers

As central processing units shrink, so too do computers themselves. Over the last 60 or so years, the computer has evolved from a machine that filled an entire room to a device that can fit neatly in your pocket. And just as electronics have shrunk, so too has the price.

Today, for just a handful of dollars consumers can purchase single-board microcomputers – about the size of a credit card – with some pretty impressive communicative options, multimedia capabilities and processing power. One of the machines at the vanguard of this low-cost, high-power, small-size computing revolution is the Raspberry Pi, launched by the Raspberry Pi Foundation in 2012 as a $35 board to promote teaching of basic computer science in schools and developing countries. 

Raspberry Pi Family Photo.

The original Raspberry Pi – the Pi 1 Model B – had a single-core 700MHz CPU with 256MB of RAM. There have been several iterations and variations since that initial release, however, with the latest model – the Pi 4 Model B – boasting a quad-core 1.5GHz CPU with up to 4GB of RAM. All models can be transformed into fully-working computers with just a little modest tinkering plus your own keyboard, mouse and monitor. Users have even had success using a Raspberry Pi as a desktop PC for regular office work, including web browsing, word processing, spreadsheets, email, and photo editing. All for under $55.

Of course, Raspberry Pi has competitors – most notably Arduino, a company that produces single-board microcontrollers (using a variety of microprocessors) that can be used to design and build devices that interact with the real world. Both Raspberry Pi and Arduino devices are widely used in development and prototyping projects, particularly for IoT devices. They are, however, very much in the domain of the hobbyist: craftsmen and women trying their hand at creating useful everyday tools such as remote controls for garage doors and thermometer devices, as well as more fun projects like gaming consoles, robots and drones. There are also various other proprietary development boards available from companies such as STMicroelectronics and Texas Instruments.

While all of these development kits are good for prototyping, they are less suitable for mass production. Here at Vinnter, we use both Raspberry Pi and Arduino devices in development projects where embedded systems need to be adopted. The challenge, however, comes when we move from a prototype to an industrialized product development project, for two main reasons: cost and size.

Though Raspberry Pi and Arduino units are relatively cheap for the consumer market, up to $35 a unit is simply not viable when planning for 10,000 or even 100,000 units to be produced for inclusion in other products. The other problem with a development kit like Arduino or Raspberry Pi is size. True, these devices are impressively small for what they are, but because they include additional functions and features (such as USB, ethernet and HDMI connectors) that most likely won’t be required for the product being developed, they are simply too big and impractical for most real-world applications. If you were developing a smart watch, for example, a credit-card-sized device isn’t practical at all. In addition, unnecessary functions increase power consumption – an important consideration, especially for battery-powered products.
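The cost side of that argument is simple arithmetic. The sketch below (Python, with a purely hypothetical $6 custom-board cost for comparison) illustrates how development-kit pricing scales at production volumes:

```python
# Rough production-cost comparison at volume. The $35 development-board
# price comes from the article; the $6 custom-board figure is a purely
# hypothetical placeholder for a purpose-built design.

def bom_cost(unit_price, units):
    """Total board cost for a production run."""
    return unit_price * units

units = 100_000
dev_board = bom_cost(35.0, units)   # development kit at volume
custom = bom_cost(6.0, units)       # hypothetical custom board
print(f"Savings at {units:,} units: ${dev_board - custom:,.0f}")
```

Even before counting the size and power drawbacks, the per-unit delta multiplied across a production run makes the case for a custom board on its own.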

Final Thoughts

Microprocessors have come a long way since the humble Intel 4004. Now controlling everything from small devices such as watches and mobile phones to large computers and even satellites, and offering low cost, small size, low power consumption, high versatility and high reliability, the microprocessor is one of the most important and significant inventions powering our modern world.

When developing microprocessor-based commercial products or new technology products for businesses, however, it is essential to design a custom microprocessor board that’s fit for purpose. Though many organizations have great success prototyping new products using single-board microcomputers such as the Raspberry Pi, when it comes to large-scale production, the project will eventually have to be migrated to a production design. As such, companies currently prototyping new products will find it invaluable to work with third parties that have the resources and expertise to take their microprocessor-based developments and ideas to the next stage.

Vinnter serves as an enabler for developing new business and service strategies for traditional industries, as well as fresh start-ups. We help companies stay competitive through embedded software development, communications and connectivity, hardware design, cloud services and secure IoT platforms. Our skilled and experienced teams of developers, engineers and business consultants will help you redefine your organization for the digital age, creating new, highly-secure connected products and digital services that meet the evolving demands of your customers. Get in touch to find out more.

The Value of Robotic Automation Lies in Reskilling the Workforce

The first robot entered the workplace in 1959, at an automotive die-casting plant in Trenton, New Jersey. Since then, many other industries such as electrical/electronics, rubber and plastics, pharma, cosmetics, food and beverage, and metal and machinery have accelerated their adoption of robotic automation. By 2017, there were over two million operational robots across the world, a number projected to almost double to 3.8 million units by 2021. Today, global robot density (the number of robot units per 10,000 employees) in manufacturing industries stands at 74, up from 66 units in 2015.

The Economic Impact of Robots

Oxford Economics’ June 2019 report, How Robots Change The World, estimates that a 1% increase in the stock of robots could boost output per worker by 0.1% across the entire workforce. The study also projects a 5.3% increase in global GDP, equivalent to about $5 trillion, if robot installations were increased 30% above the baseline forecast for 2030.
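Those two figures are mutually consistent, as a quick back-of-envelope check shows: if 5.3% of projected global GDP is about $5 trillion, the implied baseline is roughly $94 trillion.

```python
# Sanity-check the Oxford Economics figures quoted above:
# a 5.3% uplift said to equal "about $5 trillion" implies a
# projected 2030 baseline global GDP of roughly $94 trillion.

uplift_share = 0.053
uplift_usd = 5e12                       # "about $5 trillion"
implied_gdp = uplift_usd / uplift_share
print(f"Implied baseline GDP: ${implied_gdp / 1e12:.1f} trillion")
```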

Several studies over the years have established the significant impact of robotic automation on productivity, competitiveness and economic growth. There are also well-reasoned arguments about how robotic automation enables businesses to reshore jobs, increases demand for higher-skilled workers, addresses rising labor scarcity, and creates new job opportunities that do not even exist today.

The Social Impact of Robots

But all that opportunity is not without its challenges. For instance, the Oxford Economics study found that, on average, each newly installed robot displaces 1.6 manufacturing workers. This means that up to 20 million manufacturing jobs could be at risk of displacement by 2030. 

It is also necessary to acknowledge that robots are no longer a purely manufacturing phenomenon. Though the automotive sector pioneered and continues to pursue the deployment of robots, today, many other manufacturing sectors, including electrical/electronics, rubber and plastics, pharmaceutical and cosmetics, food and beverage, metal and machinery etc., are investing heavily in robotic automation. And the same is true outside of manufacturing, with retail and ecommerce, sales and marketing, customer service, IT and cybersecurity, and many more sectors and segments besides all deploying robotic automation and artificial intelligence (AI) software to enhance business intelligence and customer experiences. The market for professional services robots is also expected to grow at an average rate of 20-25% between 2018 and 2020. The entire field of robotics is advancing much faster today thanks to falling sensor prices, open source development, rapid prototyping and digital era constructs such as Robotics-as-a-Service and AI. 

The Long and the Short of It Is…

… The robots are coming, in practically every industry. But should we really all be fearful for our jobs? That is certainly the widely held view: according to McKinsey, there is widespread belief around the world that robots and computers will do much of the work currently done by humans within the next 50 years.

Research from the World Economic Forum (WEF) agrees that millions of jobs are likely to be displaced by automation, but we have less to fear from robots than many seem to think – at least in the short term. Though the Swiss think tank predicts that robots will displace 75 million jobs globally by 2022, 133 million new ones will be created – a net positive. 

The report notes four specific technological advances as the key drivers of change over the coming years. These are: ubiquitous high-speed mobile internet, artificial intelligence, widespread adoption of big data analytics, and cloud technology. “By 2022, according to the stated investment intentions of companies surveyed for this report, 85% of respondents are likely or very likely to have expanded their adoption of user and entity big data analytics,” write the report’s authors. “Similarly, large proportions of companies are likely or very likely to have expanded their adoption of technologies such as the Internet of Things and app- and web- enabled markets, and to make extensive use of cloud computing. Machine learning and augmented and virtual reality are poised to likewise receive considerable business investment.”

The Reskilling Revolution 

WEF finds that nearly 50% of companies expect that automation will lead to some reduction in their full-time workforce by 2022, based on the job profiles of their employee base today. However, nearly a quarter expect automation to lead to the creation of new roles in the enterprise, and 38% of the businesses surveyed expect to extend their workforce to new productivity-enhancing roles. And this, indeed, is key to the robotic revolution and why so many companies are committed to investing in new technologies in the first place – because robotics, automation, machine learning, cloud computing, and big data analytics can enhance the productivity of the current workforce in the new digital economy and improve business performance.

In industries like manufacturing, these technologies provide seamless connections across production and distribution chains, streamlining the process of getting products from the assembly line into the hands of the customer. But it’s not just manufacturing – everything from healthcare to retail will benefit from these emerging and maturing technologies. And it’s not necessarily the case that robots and algorithms will replace the current workforce, either – rather, WEF says, they will “vastly improve” the productivity of existing jobs and lead to many new ones in the coming years. 

In the near future, it is expected that workers will be doing less physical work (as more and more of it is handled by robots), but also less information collecting and data processing (as these, too, will be automated), freeing them up for new tasks. Though there will be more automatic real-time data feeds and data monitoring that won’t require workers to enter and analyze data, there will also be more work at the other end of the spectrum, where real humans spend time making decisions based on the data collected, managing others, and applying expertise. Indeed, automation is more likely to augment the human workforce than replace it.

The ability to digitize information and data is stimulating complete redesigns of end-to-end processes and customer experience strategies, and creating more efficient operations. Data analytics, indeed, is a key part of realizing the potential of all next-generation technology – including robotics and automation – to enable better real-time reaction to trends and to what customers want.

Though there will inevitably be a decline in some roles as certain tasks within them become automated or redundant, in their place emerges a demand for new roles – though this does mean that the existing workforce will need to be retrained to update their skills. WEF says that among the range of roles that are set to experience increasing demand are software and applications developers, data analysts and scientists, and ecommerce and social media specialists – roles, the authors say, that are significantly based on and enhanced by the use of technology. Also expected to grow are roles that leverage distinctively “human” skills – those in customer service, sales and marketing, training and development, people and culture, organizational development, and innovation management. There will also be accelerating demand for wholly new specialist roles related to understanding and leveraging the latest emerging technologies – AI and machine learning specialists, big data specialists, process automation experts, security analysts, user experience and human-machine interaction designers, and robotics engineers.   

In short, the robotics revolution will spur a reskilling revolution – and businesses already seem to be on board with this idea: 66% of respondents in a McKinsey study assigned a top-ten priority to addressing automation- and digitization-related skill gaps.

Final Thoughts 

As we head towards 2020, robotics, automation and related technologies are becoming a prerequisite for any company that wishes to remain competitive. Businesses large and small are embracing automation technologies – from fully-fledged assembly line robots to customized call center chatbots – to help simplify business processes, improve productivity and deliver better customer experiences at scale. This trend is only going to accelerate in the future, and though the rise of the robots may be a cause of concern for many in the labor market, the reality is that organizations won’t be able to solve all of their problems with automation alone. Rather, the robots are coming to augment the human workforce, not replace it. Job roles may change, and new skills may be required, but unless companies of all sizes start automating their processes, they will soon find themselves gobbled up by those that do.  


Using design thinking when developing IoT solutions

A McKinsey analysis of over 150 use cases estimated that IoT could have an annual economic impact in the $3.9 trillion to $11.1 trillion range by 2025. At the top end, that’s a value equivalent to 11% of the global economy. We believe that using design thinking when developing IoT solutions will help the industry reach those financial targets.


It is not that difficult to envisage that degree of value-add given the continually evolving technical and strategic potential of IoT; there are new standards, components, platforms, protocols, etc. emerging almost daily. It is now possible to combine these options in seemingly endless ways, to address different use case requirements of connectivity, bandwidth, power consumption, user interaction, etc. to suit almost every potential application and user need out there. 

There will, however, be several technical, regulatory and human resources challenges that will have to be addressed before we can extract the real value of IoT. But perhaps the biggest challenge will lie in the approach that IoT companies take to identifying user needs and developing solutions that represent real value. 

Every technology cycle, from the dot com boom to the current AI gold rush, produces its own set of quirky, weird and downright pointless applications. And IoT is no different, with many products boasting connectivity features that may qualify them for a “smart” tag but offer no real benefits whatsoever. Every premier industry event like CES is followed by a slew of news roundups describing bewilderingly absurd “smart” solutions, from smart dental floss to $8,000 voice-activated toilets. 

But we believe that IoT’s true potential and value will only emerge when the focus is squarely on leveraging the power of IoT to address what are known as wicked problems. 

The concept of wicked problems was first defined by design theorist Horst Rittel in the context of social planning in the mid 1960s. It refers to complex issues, characterized by multiple interdependent variables and disparate perspectives, which seem impossible to solve. These problems do not necessarily lend themselves to traditional linear problem-solving processes and methodologies and require a new approach that can handle the inherent ambiguity and complexity of these issues. It was design theorist and academic Richard Buchanan who, in 1992, referenced design thinking as the innovation required to tackle wicked problems. 

Notwithstanding smart litter boxes that can text and smart garbage cans that automate shopping lists, the focal point of IoT has to be on identifying and addressing intractable problems and design thinking is the approach that will enable the IoT industry to do just that.  

Design thinking – A brief history

For many in the industry, design thinking is almost inextricably linked to Tim Brown and IDEO, and the pair played an important role in mainstreaming both the term and the practice. But as IDEO helpfully clarifies on its website, though the company is often credited with inventing the term, design thinking has roots in a global conversation that has been unfolding for decades.

To understand how that conversation unfolded, we turn to Nigel Cross, Emeritus Professor of Design Studies at The Open University, UK, and his 2001 paper Designerly Ways Of Knowing: Design Discipline Versus Design Science. The paper traces the roots of what would eventually evolve into design thinking to the 1920s, and the first modern design movement. According to Cross, the aspiration was to “scientise” design and produce works of art and design that adhered to key scientific values such as objectivity and rationality. 

These aspirations surfaced again in the 1960s, but the focus had evolved considerably. Where formerly the emphasis was on scientific design products, the design methods movement of the 60s focused on the scientific design process, and design methodology emerged as a valid subject of inquiry. The decade was capped by cognitive scientist and Nobel laureate Herbert Simon’s 1969 book, The Sciences of the Artificial, which refers to techniques such as rapid prototyping and testing through observation that are part of the design thinking process today.

This “design science decade” laid the groundwork for experts from various fields to examine their own design processes and contribute ideas that would advance the aspiration to scientise design. IDEO came along in the early 90s with a design process, modeled on work developed at the Stanford Design School, that even non-designers could wrap their heads around, thus providing the impetus to take design thinking mainstream. By 2005, Stanford had launched its own course on design thinking. Today, several leading educational institutions offer design thinking courses, and a whole range of non-design businesses rely on design thinking to resolve some of their wickedest problems.

So, what is design thinking?  

Let’s start with a slice of history again. 

In the 80s, Bryan Lawson, professor at the School of Architecture of the University of Sheffield, United Kingdom, conducted an empirical study to understand how the approach to problem-solving varies between scientists and designers. The study revealed that scientists used problem-focused strategies, whereas designers employed solution-focused strategies. Scientists solve by analysis; designers solve by synthesis.

A problem-focused approach relies on identifying and defining all parameters of a problem in order to create a solution. Solution-focused thinking, on the other hand, starts with a goal, say an improved future result, rather than focusing only on resolving the problem. 

Design thinking is a solution-focused methodology that enables the creative resolution of problems and creation of solutions, with the intent of an improved future result. It’s an approach that values analysis as well as synthesis. It is an integrated cognitive approach that combines divergent thinking, the art of creating choices, with convergent thinking, the science of making choices. Design thinking provides non-designers with elements from the designer’s toolkit that allows them to take a solution-focused approach to problem-solving. 


IDEO’s definition of design thinking as a human-centred approach also includes what is often referred to as the three lenses of innovation: desirability, feasibility and viability. Human-centred design always begins by establishing desirability: defining what people want. The next stage is to establish whether it is technically feasible to deliver what people want. And finally, even a desired and technically feasible solution must be commercially viable for a business. Design thinking, then, is a process that delivers innovative solutions optimally positioned at the overlap between desirability, feasibility and viability.

This framework should be the ideal starting point for product development in the IoT industry. Today, a lot of solutions seem to take desirability and viability for granted just because it is technically feasible to embed almost anything with connectivity. But is this the right approach to IoT innovation?  

The 5-stage design thinking model 

The design thinking process guide from the Hasso-Plattner Institute of Design at Stanford prescribes a 5-stage model that progresses as follows:


EMPATHIZE: Empathy is a critical component of the human-centred design process, which rarely if ever begins with preconceived ideas, assumptions and hypotheses. This stage allows enterprise teams to better understand the people they are designing for: their needs, values, belief systems and lived experience. As the process guide puts it, the best solutions come out of the best insights into human behavior. Design thinking encourages practitioners to observe how people interact with their environment in the context of the design challenge at hand. Designers should also engage directly with end users, not in the form of a structured interview but as a loosely bounded conversation. Both approaches can throw up insights that may not necessarily be captured by historical data or expert opinions.

DEFINE: This stage is more about defining the design challenge from the perspective of the collected end-user insights than about defining a solution. The “define” stage enables the synthesis of the vast amounts of data collected in the previous stage into insights that can help focus the design challenge. At the end of this stage, it must be possible to articulate an actionable problem statement that will inform the rest of the process.

IDEATE: The purpose of ideation is not to home in on the right idea but to generate the broadest range of possible ideas relevant to the design challenge. Finding the right idea will happen in the user testing and feedback stage. In the meantime, use as many ideation techniques as possible to move beyond the obvious into the potentially innovative. Most important of all, defer judgement, as evaluating ideas as they flow can curb imagination, creativity and intuition. At the end of the ideation process, define clear voting criteria to move multiple ideas into the prototyping stage.

PROTOTYPE: Build low-resolution (cheap and quick) prototypes, as this means more prospective ideas can be tested. Use these prototypes to elicit feedback from users and the team, which can then be looped back into refining the solutions across multiple iterations. A productive prototype is one that communicates the concept of the proposed solution, stimulates conversation, and allows for the quick and cheap failure of unworkable ideas.

TEST: Prototyping and testing often work as two halves of the same phase rather than as two distinct phases. In fact, the prototype design will have to reflect the key elements that must be tested and even how they will have to be tested. Testing need not necessarily focus only on users’ feedback to the presented prototype. In fact, this stage can sometimes generate new insights as people interact with the prototype. Rather than telling users how to use the prototype, allow them to interact freely and compare different prototypes. 

And finally, there is iteration. This is not so much a stage as a golden rule of design thinking. The point of design thinking is to create a repetitive learning loop that allows teams to refine and refocus ideas, or even change direction entirely.
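The five stages plus the iteration rule can be summarised as a loop. The sketch below is purely schematic (Python, with a toy stage function standing in for real workshops and user research), not a suggestion that design thinking can be automated:

```python
# Schematic of the 5-stage design thinking loop described above.
# The stage functions are placeholders; real insight comes from
# interviews, workshops and prototyping sessions, not from code.

STAGES = ["empathize", "define", "ideate", "prototype", "test"]

def design_thinking_cycle(run_stage, max_iterations=3):
    """Run the five stages repeatedly until the test stage validates."""
    for iteration in range(1, max_iterations + 1):
        insights = {}
        for stage in STAGES:
            # Each stage builds on everything learned so far.
            insights[stage] = run_stage(stage, insights)
        if insights["test"].get("validated"):
            return iteration, insights
    return max_iterations, insights

# Toy driver: pretend the prototype only validates on the second pass.
calls = {"n": 0}
def toy_stage(stage, insights):
    if stage == "test":
        calls["n"] += 1
        return {"validated": calls["n"] >= 2}
    return {"done": True}

iterations, _ = design_thinking_cycle(toy_stage)
print(iterations)  # 2
```

The point the loop makes is structural: testing feeds straight back into empathy and definition, so a failed prototype is a cheap lesson rather than a dead end.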

Of course, the Stanford model is not the only design thinking framework in circulation today. Those interested in more options can find an introductory compilation at 10 Models for Design Thinking. Though these frameworks may vary in nomenclature and process structure, some central design thinking concepts such as empathy and iteration remain common to most.

Is design thinking effective? 

According to one source, only 24% of design thinking users measure the impact of their programs. Even a Stanford survey found that organizations struggled to determine ROI.

However, in an excellent article in the Harvard Business Review, Jeanne Liedtka, professor of business administration at the University of Virginia’s Darden School of Business, concludes, after a seven-year qualitative study of 50 projects across sectors, that “design thinking has the potential to do for innovation exactly what TQM did for manufacturing: unleash people’s full creative energies, win their commitment and radically improve processes.”

A more quantitative study by Forrester on The Total Economic Impact Of IBM’s Design Thinking Practice provides a litany of quantified benefits that includes the realization of $20.6 million in total value due to a design thinking-led reduction in design, development and maintenance costs.  

But the limited availability of quantitative data has been offset by the steady stream of success stories of world-leading companies transforming elements of their business with design thinking. 

Design thinking offers the framework that, at a fundamental level, will enable the IoT industry to reorient itself away from a “what can I connect next to the internet” mindset to a “where do users need help the most” approach. Its human-centric empathy-driven approach enables businesses to identify and understand potential contexts and problems from the perspective of the end-user rather than from the point of view of the possibilities afforded by technology. Companies can now use the three lenses of innovation to evaluate the practical, technical and commercial value of the solutions that they plan to deploy. And finally, the inclusive and iterative design process will ensure a much higher probability of success while enabling real value for customers. 

Access Control & IoT Security: Challenges And Opportunities

IoT, the new attack vector

IoT attacks increased by over 217% in 2018. Yet a report with the provocative title of IoT Cyberattacks Are The Norm, The Security Mindset Isn’t found that only 7% of organizations consider themselves equipped to tackle IoT security challenges. If that sounds wanting, consider this: 82% of organizations that develop IoT devices are concerned that their devices are not adequately secured against cyberattack. Another study found that only 43% of enterprise IoT implementations prioritize security during the development/deployment process, and only 38% involve security decision-makers in the process. Access control is considered to be the first line of defence when it comes to IoT security.

Now, those broad trend indicators can possibly apply to any nascent technology. But there are two factors that make the IoT scenario particularly precarious. The first is the fact that, by all indications, the IoT is emerging as a potentially preferred attack vector for launching botnet assaults or even infiltrating enterprise networks. The second is that thus far, the IoT industry, from device developers to enterprise IT organizations, seems oblivious or ill-equipped to even secure access control and authentication, one of the fundamental components of any technology security strategy. 

Key IoT security challenges

However, an objective analysis of the scenario cannot but mention some of the unique characteristics of IoT networks that make security much more of a challenge than with other technology environments.  

First off, there’s the attack surface. An estimated 20 billion devices will be connected to the IoT by 2020; that’s 20 billion potential endpoint targets for malicious intent. A lot of these devices will be deployed in areas where it may be impossible or impractical to provide physical security, which makes it easier for bad actors to physically compromise devices on the network. Beyond the physical device, each IoT system comprises multiple edges and tiers, including mobile applications, cloud and network interfaces, backend APIs, etc. Each of these elements represents a potential vulnerability, and just one unsecured component can be leveraged to compromise the entire network.

Second, there’s the sheer heterogeneity of IoT networks, with a range of different hardware and software stacks, governed by different access-control frameworks and with varying levels of privileged access. This means there is no one-size-fits-all approach to security, and an IoT security strategy will have to be designed around the characteristics of the participating entities on each network.

And finally, most IoT devices have limited power, storage, bandwidth and computational capabilities. So conventional security methods that are effective in other computing systems will be too complex to run on these constrained IoT devices. 

Device visibility precedes access control 

It is this distributed nature of IoT, where large volumes of devices communicate autonomously across multiple standards and protocols, that makes security more complex than in other, more monolithic computing environments. That’s also why the IoT industry will need to reimagine conventional access control and authentication models and protocols and repurpose them for this new paradigm. The right access control and authentication frameworks enable companies to identify IoT devices, isolate compromised nodes, ensure the integrity of data, authenticate users, and authorize different levels of data access.

Since access control is the first point of contact between a device and the IoT network, these technologies must be able to recognize devices in order to determine the next course of action. IoT devices have to be visible before access control and authentication can kick in and do their job. But most enterprises currently do not fare very well on IoT device visibility: a mere 5% keep an inventory of all managed IoT devices, and only 8% have the capability to scan for IoT devices in real time. However, 46% are making it a priority in 2019 to enhance IoT discovery, isolation and access control, and that provides the starting point for a discussion on the merits of the different access control models available today. 

There are several types of access control models that can be considered for different IoT scenarios: from the basic ACL (Access Control List) model, to the slightly more advanced MAC (Mandatory Access Control) model used primarily in military applications, to the still-evolving Trust Attribute-Based Access Control model that builds on the ABAC (Attribute-Based Access Control) model to address requirements specific to IoT. 

Types of access control and authentication models 

But for the purposes of this article, we shall focus on the more mainstream models: RBAC (Role-Based Access Control), ABAC, CapBAC (Capability-Based Access Control) and the UCON (Usage Control) model. 

RBAC: As the name suggests, this model manages resource access based on a hierarchy of permissions and rights assigned to specific roles. It allows multiple users who need access to the same resources to be grouped into roles. This approach can be useful in terms of limiting the number of access policies but may not be suitable for complex and dynamic IoT scenarios. It is possible to extend RBAC to address the fine-grained access control requirements of IoT, though this could result in “role explosion” and create an administrative nightmare. 
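To make the role-to-permission indirection concrete, here is a minimal RBAC sketch in Python. All role, user and permission names are hypothetical, and a real deployment would back this with a policy store rather than in-memory dictionaries:

```python
# Minimal RBAC sketch: permissions attach to roles, users acquire
# permissions only through role membership (names are illustrative).

ROLE_PERMISSIONS = {
    "admin":    {"read_telemetry", "update_firmware", "manage_users"},
    "operator": {"read_telemetry", "update_firmware"},
    "viewer":   {"read_telemetry"},
}

USER_ROLES = {
    "alice": {"admin"},
    "bob":   {"viewer"},
}

def is_allowed(user: str, permission: str) -> bool:
    """A user is allowed an action if any of their roles grants it."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_allowed("alice", "update_firmware"))  # True
print(is_allowed("bob", "update_firmware"))    # False
```

The “role explosion” problem mentioned above shows up here directly: every new combination of device-specific permissions forces yet another entry in the role table.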

The OrBAC (Organization-Based Access Control) model was created to address issues with RBAC and to make it more flexible. This model introduced new abstraction levels and the capability to include different kinds of contextual data, such as historical, spatial and temporal data. There has also been a more recent evolution along the same trajectory with SmartOrBAC, a model designed for IoT environments that offers context-aware access control. 

ABAC: In this model, the emphasis shifts from roles to attributes, on the consideration that access control may not always have to be determined by identity and roles alone. Access requests in ABAC are evaluated against a range of attributes that define the user, the resource, the action, the context and the environment. This approach affords more dynamic access control capabilities, as user access and the actions users can perform can change in real time based on changes in the contextual attributes.  

ABAC provides the more fine-grained, contextual access control that is better suited to IoT environments than RBAC. It enables administrators to choose the best combination of a range of variables to build a robust and comprehensive set of access rules and policies. In fact, administrators can apply access control policies even without any prior knowledge of specific subjects, by using data points that are more effective at indicating identity. The biggest challenge in this model could be defining a set of attributes that is acceptable across the board. 
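The attribute-driven evaluation described above can be sketched as a set of policy predicates over subject, resource, action and environment attributes. This is a toy illustration, not a full ABAC engine; the attribute names and the single policy are hypothetical:

```python
# Minimal ABAC sketch: each policy is a predicate over subject,
# resource, action and environment attributes (all names illustrative).

def technician_daytime_policy(subject, resource, action, env):
    """Allow maintenance staff to write configs to devices at their
    own site, but only during working hours."""
    return (
        subject["department"] == "maintenance"
        and action == "write_config"
        and resource["site"] == subject["site"]
        and 8 <= env["hour"] < 18
    )

POLICIES = [technician_daytime_policy]

def is_allowed(subject, resource, action, env):
    # Permit-overrides combining: grant if any policy evaluates to True.
    return any(policy(subject, resource, action, env) for policy in POLICIES)

subject = {"department": "maintenance", "site": "plant-7"}
resource = {"site": "plant-7"}
print(is_allowed(subject, resource, "write_config", {"hour": 10}))  # True
print(is_allowed(subject, resource, "write_config", {"hour": 22}))  # False
```

Note how the same subject is granted or denied purely by a change in the environment attribute (`hour`), which is the dynamic behavior RBAC cannot express without extra roles.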

CapBAC: Both RBAC and ABAC use a centralized approach to access control, in that all authorization requests are processed by a central authority. Though these models have been applied in IoT-specific scenarios, achieving end-to-end security using a centralized architecture on a distributed system such as the IoT can be quite challenging. 

The CapBAC model is based on a distributed approach where “things” are able to make authorization decisions without having to defer to a centralized authority. This approach accounts for the unique characteristics of the IoT such as large volume of devices and limited device-level resources. Local environmental conditions are also a key consideration driving authorization decisions in this model, thus enabling context-aware access control that is critical to IoT. 

The capability, in this case, refers to a communicable, unforgeable token of authority that uniquely references an object as well as an associated set of access rights or privileges. Any process with the right key is granted the capability to interact with the referenced object as per the defined access rights. The biggest advantage of this model is that distributed devices do not have to manage complex sets of policies or carry out elaborate authentication protocols, which makes it ideal for resource-constrained IoT devices.
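One common way to make such a token unforgeable is to protect it with a message authentication code, so a constrained device can verify it locally without consulting a central server. The sketch below assumes a pre-provisioned shared secret and is illustrative only, not a production token design:

```python
# Minimal capability-token sketch: the token names an object and a set
# of rights, and carries an HMAC tag so a device can verify it locally.
import hashlib
import hmac
import json

SECRET = b"device-shared-secret"  # hypothetical pre-provisioned key

def issue_capability(object_id: str, rights: list) -> dict:
    payload = {"object": object_id, "rights": sorted(rights)}
    msg = json.dumps(payload, sort_keys=True).encode()
    payload["tag"] = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return payload

def check_capability(token: dict, object_id: str, right: str) -> bool:
    # Recompute the tag over the claimed payload; tampering breaks it.
    payload = {"object": token["object"], "rights": token["rights"]}
    msg = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    tag_ok = hmac.compare_digest(token["tag"], expected)
    return tag_ok and token["object"] == object_id and right in token["rights"]

cap = issue_capability("thermostat-42", ["read", "set_temperature"])
print(check_capability(cap, "thermostat-42", "read"))    # True
print(check_capability(cap, "thermostat-42", "reboot"))  # False
```

Because verification needs only the shared key and a hash computation, the device never has to store policy sets or contact an authorization server, which is the property that makes the distributed CapBAC approach attractive for constrained hardware.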

UCON: This is an evolution of the traditional RBAC and ABAC models that introduces more flexibility in handling authorizations. In the traditional models, subject and object attributes can be changed either before an authorization request begins or after it is completed, but not while the subject holds permission to interact with an object. 

The UCON model introduces the concept of mutable attributes as well as two new decision factors, namely obligations and conditions, to go with authorizations. Mutable attributes are subject, object or contextual features that change their value as a consequence of usage of an object. By enabling continuous policy evaluation even when access is ongoing, UCON makes it possible to intervene as soon as a change in attribute value renders the execution right invalid.
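The continuous-evaluation idea can be illustrated with a session whose policy is re-checked on every access, over a mutable attribute that usage itself consumes. This is a toy sketch; the `quota` attribute and the session API are hypothetical:

```python
# Minimal UCON-style sketch: the subject's "quota" attribute is mutable
# (usage decrements it), and the policy is re-evaluated on every access,
# so the session can be revoked mid-use.

class Session:
    def __init__(self, subject):
        self.subject = subject
        self.active = True

    def _ongoing_policy(self):
        # Ongoing authorization: must hold for the lifetime of the session.
        return self.subject["quota"] > 0

    def access(self):
        if not (self.active and self._ongoing_policy()):
            self.active = False
            return "revoked"
        self.subject["quota"] -= 1  # attribute mutated as a consequence of usage
        return "granted"

session = Session({"quota": 2})
print(session.access())  # granted
print(session.access())  # granted
print(session.access())  # revoked (quota exhausted by prior usage)
```

The key contrast with plain RBAC/ABAC is the third call: the permission was valid when the session started, but a change in a mutable attribute invalidates the execution right while access is ongoing.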


Apart from these mainstream models, there are also several frameworks and protocols, such as the Extensible Access Control Markup Language (XACML), OAuth, and User-Managed Access (UMA), that are being studied for their applicability to IoT environments. But it is fair to say that the pace of development of IoT-specific access control models seriously lags development efforts in other areas, such as connectivity options, standards and protocols. 

The other worrying aspect of the situation is that enterprise efforts to address IoT security concerns do not show the same urgency as those driving IoT deployments. All this even after a large-scale malware attack in 2016 hijacked over 600,000 IoT devices using just around 60 default device credentials. A robust access control and authentication solution should help thwart an attack of that intensity. But then again, access control is just one component, albeit a critical one, of an integrated IoT security strategy. The emphasis has to be on security by design, through hardware, software and application development, rather than as an afterthought. And that has to happen immediately, considering that the biggest IoT vulnerability, according to the most recent top 10 list from the Open Web Application Security Project, is weak, guessable, or hardcoded passwords.  

From Smart To Helpful – The Next Generation Connected Home

“No one asked for smartness, for the smart home.” That’s the head of Google’s smart home products explaining the company’s decision to focus on delivering a helpful home that provides actual benefits rather than a smart home that showcases technology. This is key; the next generation connected home must provide convenience and actual benefit.


Smart home or helpful home, what’s in a name when the industry is growing at a CAGR of almost 15% and is expected to more than double in value, from USD 24.10 billion in 2016 to USD 53.45 billion in 2022? Growing acceptance of connected home devices powered global shipments to over 168 million in the first quarter of 2019, up 37.3% from the same period the previous year. IDC estimates that shipments will continue to grow at almost a 15% CAGR, from 840.7 million units at the end of 2019 to 1.46 billion units by 2023. 


There are a lot of factors fueling the increasing acceptance of connected home devices. A majority of consumers expect their next home to be connected and are willing to pay more for a helpful home. Though this trend may be spearheaded by digital-savvy millennials and their penchant for tech innovations, the convenience afforded by these smart solutions is drawing in the older generations as well. Fairly recent innovations like voice-enabled interfaces are simplifying the adoption process for a larger proportion of consumers. At the same time, increasing competition and falling device prices, rising interest in green homes and sustainable living, have all, to varying degrees, helped convert consumer interest into action. 

But of course, there has to be underlying value to all these trends and preferences. 

Key value drivers in smart home systems

There are broadly three layers of value in a smart home system. The first is the convenience of anytime-anywhere accessibility and control, where consumers can change the state of their devices, such as locking doors or turning off lights, even remotely, through a simple voice or app interface. 

The second layer enables consumers to monitor and manage the performance of these systems based on the data they generate. For instance, consumers can manage their energy consumption based on smart meter data, or control costs with fine-grained, zone-based temperature control using smart thermostats. 

The final layer is automation, which is the logic layer that enables consumers to fine tune and automate the entire system based on their individual needs and preferences. 
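The automation layer is essentially declarative logic over the home’s state: rules map sensed conditions to device actions. The toy sketch below illustrates the idea; all device names, state fields and rules are hypothetical:

```python
# Toy sketch of a smart home automation layer: declarative rules map a
# sensed condition to a device action (all names are illustrative).

home_state = {"motion_hallway": True, "hour": 23, "hallway_light": "off"}

RULES = [
    # (condition over state, action on state)
    (lambda s: s["motion_hallway"] and s["hour"] >= 22,
     lambda s: s.update(hallway_light="dim")),   # late night: dim only
    (lambda s: s["motion_hallway"] and s["hour"] < 22,
     lambda s: s.update(hallway_light="on")),    # daytime/evening: full on
]

def run_rules(state):
    for condition, action in RULES:
        if condition(state):
            action(state)

run_rules(home_state)
print(home_state["hallway_light"])  # dim
```

Real systems express the same pattern through platform-specific triggers and scenes rather than inline lambdas, but the fine-tuning value is the same: the consumer encodes a preference once and the system applies it continuously.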

To date, there have been some empirical quantifications of value, such as how many smart homeowners in the US save 30 minutes a day and $1,180 every year, or how smart thermostats can cut temperature control costs by 20%. It is also possible, at least theoretically, to link adoption to value: smart home segments such as energy and security management, with tangible value propositions of cost savings and safety, have traditionally experienced higher rates of adoption. 

But as the smart home market evolves beyond the hype and adoption cycle, the dynamics of value are changing. Google’s pivot from smart to helpful reflects this shift in the connected home market. It is no longer about the technology but about the value it can deliver.    

The future value of smart home technologies

Consumers get smart home tech. In the US, a key market for this emerging technology, most people already use at least one smart home device. According to one report, US broadband households now own more than 10 connected devices, with purchase intention only getting stronger over the years. The global average for smart home devices per household is forecast to reach 16.53, up from the current 5.35. 

Along with device density, consumer expectations of the technology are also rising. Almost 80% of consumers in a global study expect a seamless, personalized and unified experience where their house, car, phone and more all talk to each other. They expect emerging technologies like AI to enhance their connected experience. And they expect all this to be delivered without compromising privacy or security. 

There is a similar shift on the supply side of the market too. 

If the emphasis thus far was on getting products into consumers’ homes, the future will be about creating a cohesive experience across all these devices. In this future, services, rather than devices, will determine the value of an IoT vendor. With device margins fading away, the leaders will be determined by their ability to leverage the power of smart home device data to deliver services that represent real value for consumers.  

So a seamless, cohesive cross-device experience is what consumers expect, and it is also what will drive revenue for smart home solution providers. And the first step towards realizing this future will be to address the systemic issue of interoperability in smart homes. 

Interoperability in smart home technologies

Interoperability over brand loyalty: that seems to be the consumer stance, according to a report from market research and consulting firm Parks Associates. When it comes to purchasing new devices, more people prioritize interoperability with their current smart home setup over matching brands with their existing products. 


The true smart home is not a loosely connected set of point solutions. It is an integrated ecosystem of smart devices that delivers a seamless and cohesive smart home experience. 

For smart home vendors, interoperability creates the data foundation on which to build and monetize new solutions and services that add value to the consumer experience. Ninety-seven percent of respondents to a 2018 online survey of decision-makers in the smart home industry believed that shared data and communication standards would benefit their business. These benefits included the ability to create new solution categories (54%), capture and correlate richer data sets (43%), focus on core strengths rather than grappling with integration issues (44%), and accelerate adoption (48%).     

There are two fallouts from the limited interoperability standards in the smart home market today. The first is the integration challenge it creates for consumers trying to build a cohesive ecosystem out of an extensive choice of solutions fragmented by different standards and protocols. 

There are a few ways in which consumers can address this challenge. The rapid rise of smart speakers, the fastest-growing consumer technology in recent times, and of voice-enabled interfaces has helped streamline adoption and simplified integration to a certain degree. The next option is to invest in a dedicated smart home hub, like the Insteon Hub or Samsung SmartThings Hub, that ties together and translates the various protocol communications from smart home devices. Many of these hubs can now be controlled using Amazon Alexa and Google Assistant voice controls. Universal control apps such as IFTTT and Yonomi also enable users to link their devices and define simple rule-based actions, with the caveat that the devices must have been integrated by their manufacturers. Many device vendors have also launched “works with” programs to expand compatibility and enable consumers to create a more or less unified smart home solution. 

Though each of these approaches has its merits, collectively they represent an attempt to mitigate the symptoms of fragmentation rather than enforce interoperability by design. A shared standard would go a long way toward addressing the limitations of the current approach and enabling organic interoperability in smart homes. 

OCF and open source, open standard interoperability

OCF (Open Connectivity Foundation) is an industry consortium dedicated to ensuring secure interoperability for consumers and IoT businesses. Its members include tech giants such as Microsoft, Cisco, Intel and appliance majors such as Samsung, LG, Electrolux and Haier.      

For businesses, OCF provides open standard specifications, code and a certification program that enable manufacturers to bring to market OCF Certified products with broad-scale interoperability across operating systems, platforms, transports and vendors. The Foundation’s 1.0 specification was ratified last year and will soon be published as an ISO/IEC standard. OCF also provides two open source implementations, IoTivity and IoTivity Lite, for manufacturers looking to adopt the ratified standard and maximize interoperability without having to develop for different standards and devices. 

OCF’s latest 2.0 specification introduces several new features, including device-to-device connectivity over the cloud, something that was not possible in 1.0. The 2.0 specification will be submitted for ISO/IEC ratification later this year. 

With its specification now recognized worldwide, OCF continues to advance the development of a truly open IoT protocol, equipping developers and manufacturers in the IoT ecosystem with the tools they need to provide a secure, interoperable end-user experience.

OCF works with key partners, such as Zigbee, Wi-Fi Alliance, World Wide Web Consortium (W3C), Thread, and Personal Connected Health Alliance (PCHAlliance), and with over 400 members from the industry to create standards that extend interoperability as an operating principle.  

Interoperability, however, is often only the second biggest concern of smart home consumers. The first is security, relating to hacked or hijacked connected home systems, and privacy, relating to how consumer data is collected, used and shared. 

Security & privacy in smart homes

In July this year, there were news reports of a massive smart home breach that exposed two billion consumer records. This was not the result of a sophisticated or coordinated attack, but rather the consequence of a single misconfigured Internet-facing database left without a password. It was a similar situation with the Mirai attack of 2016, in which consumer IoT devices such as home routers, air-quality monitors and personal surveillance cameras were hijacked to launch one of the biggest DDoS attacks ever. Then too, there was no sophistication involved: the attackers simply used around 60 commonly used default device credentials to infect over 600,000 devices.  

IoT, including consumer IoT, offers some unique challenges when it comes to security. But the security mindset has yet to catch up with the immensity of the challenge. 

It’s a similar situation when it comes to privacy. Globally, most consumers find the data collection process creepy, do not trust companies to handle and protect their personal information responsibly and are significantly concerned about the way personal data is used without their permission. 

The situation may just be set to change as the first standards for consumer IoT security start to roll in. 

Earlier this year, ETSI, a European standards organization, released a globally applicable standard for consumer IoT security that defines a security baseline for internet-connected consumer products and provides a basis for future IoT certification schemes. The new standard specifies several high-level provisions, including a pointed rejection of default passwords. The ETSI specification also mandates a vulnerability disclosure policy that would allow security researchers and others to report security issues.

Security is an issue of consumer trust, not of compliance. The smart home industry has to take the lead on ensuring the security of connected homes by adopting a “secure by design” principle. 

Emerging opportunities in smart homes

As mentioned earlier, consumers expect their smart home experience to flow through to their outdoor routines, their automobiles and their entire daily schedules. Smart home devices will be expected to take on more complex consumer workloads, such as health applications, and AI will play a significant role in making this happen. AI will also open up the next generation of automation possibilities for consumers and play a central role in ensuring the security of smart home networks.

Data will play a central role in delivering a unified, personalized and whole-home IoT experience for consumers. Companies with the capability to take cross-device data and convert it into insight and monetizable services will be able to open up new revenue opportunities. However, these emerging data-led opportunities will come with additional scrutiny on a company’s data privacy and security credentials.