Robotic Process Automation: What Is It, and What It Brings

Whenever the term Robotic Process Automation (RPA) is mentioned, it is not hard to conjure images of cold, mechanical machines doing physical labour and rendering human workers redundant. Such a perception could not be further from the truth, not just because the word “robotic” can be misleading, but also because of a lack of understanding of what RPA is beyond the headlines.

The Subject

So what is RPA? Unlike many other topics discussed on this site, there is one specific, official definition published by a governing authority (in this case, a diverse panel of industry participants). According to the IEEE Guide for Terms and Concepts in Intelligent Process Automation published by the IEEE Standards Association, RPA is defined as a “preconfigured software instance that uses business rules and predefined activity choreography to complete the autonomous execution of a combination of processes, activities, transactions, and tasks in one or more unrelated software systems to deliver a result or service with human exception management”.

Now, the problem with standard definitions is that the meaning can often be lost in a sea of words. One site that cited this definition had to include a simpler analogy: software robots that mimic human actions.

This, however, should not be confused with Artificial Intelligence (AI), which the same site likened to human intelligence being simulated by machines. In fact, RPA sits lower than AI on a doing-thinking continuum: RPA is more process-driven, whereas AI is data-driven.

A doing-thinking continuum, with robotic process automation in the middle-left under process-driven, and artificial intelligence on the far right under data-driven.

So how does RPA work? Several sites have pointed out that RPA evolved from several earlier technologies. The most commonly cited is screen scraping: collecting data displayed on screen, usually from a legacy application, for use in a more modern interface. Another is (traditional) workflow automation, where a list of actions is programmed into software to automate tasks by interacting with back-end systems through application programming interfaces (APIs) or scripting languages.

RPA, having evolved from those technologies, develops its list of actions by monitoring users performing the task in the Graphical User Interface (GUI), and then performs the automation by repeating those tasks on the GUI. Furthermore, RPA does not require a physical screen to operate, as the actions can take place in a virtual environment.
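To make that concrete, here is a minimal sketch of what GUI-level automation looks like, using Python’s pyautogui library. The coordinates and text are made-up placeholders, and a real RPA platform would record these steps by watching a user rather than having them hand-coded:

```python
import pyautogui  # pip install pyautogui

# Replay a recorded sequence of clicks and keystrokes on the GUI,
# exactly as a human user would perform them. The coordinates and
# field values below are hypothetical placeholders.
pyautogui.click(420, 310)          # click the (recorded) customer field
pyautogui.write("ACME Sdn Bhd")    # type the customer name
pyautogui.press("tab")             # move to the next field
pyautogui.write("1250.00")         # type the invoice amount
pyautogui.press("enter")           # submit the form
```

Notice that the script knows nothing about the application underneath – it only knows where to click and what to type, which is both why RPA is easy to bolt onto legacy systems and, as we will see, why it is brittle.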

The Pros and Cons

It’s not too hard to look at the continuum above (also called the “Intelligent Automation Continuum”, albeit a simplified one) and relate the benefits and risks to topics that have been discussed here, such as Machine Learning and Artificial Intelligence. However, since RPA is process-driven rather than data-driven, the benefits differ as well.

Multiple sources cite the benefit of greater efficiency, as RPA can conduct repetitive tasks quickly, around the clock, with minimal error. With such efficiency, organisations that use RPA may reap cost savings on staffing, since such tasks no longer require the same headcount.

Some sites were more subtle about the message of reduced staffing, pointing out that RPA may free staff from monotonous and repetitive tasks to conduct more productive, high-value work that requires creativity and decision making, or exploring the opportunity for people to be re-skilled into new jobs in the new economy.

But just like many other topics discussed on this site, human worker redundancy is the elephant in the room. According to estimates from Forrester Research, RPA software could displace 230 million or more knowledge workers, about 9 percent of the global workforce. Furthermore, in some cases, re-skilling displaced workers may not be within organisational users’ consideration, since there may not be as many new jobs available for these displaced workers, not to mention that re-skilling may negate the cost savings achieved. With that said, many organisations have already resorted to Business Process Outsourcing (BPO) for exactly the kind of tasks RPA is suited for, and hence displacement may be felt most seriously in BPO firms.

Another benefit of RPA cited by certain sites is that it can be used without huge customisation to systems and infrastructure. Since RPA is generally GUI-based, it does not require deep integration with systems or alterations to infrastructure, and is supposedly easy to implement. In fact, automation efforts can be boosted by combining RPA with other cognitive technologies such as Machine Learning and Natural Language Processing.

That being said, RPA’s dependency on systems’ user interfaces carries a risk of obsolescence. RPA interacts with the user interface exactly as it was recorded or programmed to do, and when the interface changes, the automation breaks down. And remember, RPA is also reliant on the exactness of data structures and sources, rendering it rather inflexible. This inflexibility is a stark contrast to how easily humans adjust their behaviour to changes as they arise.

Then there are APIs. Modern applications usually expose APIs, which are a more “resilient approach” to interacting with back-end systems to automate processes, compared with the brittleness RPA suffers from the limitation described above. Furthermore, APIs may be a more favourable option in an end-to-end straight-through processing ecosystem involving multiple operating systems and environments.
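For contrast with the GUI script earlier, here is the same (hypothetical) invoice submission done through a REST API using Python’s requests library. The endpoint and field names are invented for illustration, but the point stands: the call does not care what the screen looks like, so a cosmetic redesign cannot break it:

```python
import requests  # pip install requests

# Submit the same invoice through a (hypothetical) REST API instead
# of clicking through the GUI. Endpoint and fields are illustrative.
response = requests.post(
    "https://erp.example.com/api/v1/invoices",
    json={"customer": "ACME Sdn Bhd", "amount": 1250.00},
    timeout=10,
)
response.raise_for_status()  # fail loudly rather than click blindly
print(response.json())       # e.g. the ID of the created invoice
```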

The Takeaway

There are many use cases for RPA these days; it is not exactly a new topic. Plus, with the criticism of its dependency on features that may change or become obsolete, RPA may not seem as alluring any more. In fact, one rule of thumb is to consider whether processes could be handled straight-through with existing capabilities before resorting to RPA.

Organisations should identify tasks where RPA can be applied and remain relevant in years to come before making the decision. Others advise a more broad-based approach to investing in automation – consider the whole continuum instead of expecting RPA to be the silver bullet for operational efficiency.

As for the redundancy problem, it has been a recurring theme in this age of digitalisation. To reiterate several posts written here, society as a whole needs to confront such issues and answer grave, philosophical questions concerning human jobs and roles in the future. It is an essential discourse, and unfortunately one that is not happening with due significance. And if we were to take reference from history, not doing much is simply equivalent to a Luddite approach.

Fourth Industrial Revolution vs. Industry 4.0: Same but different?

It is the new year of 2019, and for the first post of the year I would like to write about a key concept that will set an underlying tone for the year(s) ahead, and serve as the backdrop for some of the topics I may feature on this blog this year (having already featured some in 2018). The terms “Fourth Industrial Revolution” and “Industry 4.0” have become buzzwords of the century, growing from mere jargon used by management consultants into widespread use across various industries.

But as I came to realise, even though we hear the terms “Fourth Industrial Revolution” and “Industry 4.0” used interchangeably, the equivalence might not be as strong as we perceive.

With that, let’s get right into what these terms entail.

Fourth Industrial Revolution

The term “Fourth Industrial Revolution” (4IR) was first coined by World Economic Forum founder and executive chairman Klaus Schwab in 2015, when he compared today’s technological progress with the industrial revolutions that came before.

Source: https://online-journals.org/index.php/i-jim/article/viewFile/7072/4532

Of course, there are various definitions and descriptions of the 4IR, but they all point to an industrial transformation “characterized by a range of new technologies that are fusing the physical, digital and biological worlds, impacting all disciplines, economies and industries, and even challenging ideas about what it means to be human”, as the World Economic Forum puts it.

Schwab has highlighted that the 4IR is not a mere extension of the Third Industrial Revolution, which spanned from the 1960s to the end of the 20th century, as it differs in velocity, scope and systems impact. He pointed out that the speed of technological progress under 4IR would be exponential rather than linear, that the scope is wider with every industry being affected, and that it would impact systems of production, management and governance in a transformative way.

Among the technologies cited under 4IR are “artificial intelligence, robotics, the Internet of Things, autonomous vehicles, 3-D printing, nanotechnology, biotechnology, materials science, energy storage, and quantum computing”.

Schwab has also identified the opportunities and challenges that underlie 4IR. 4IR may bring about an improvement in global income levels and quality of life for people across the world through greater access to affordable digital services, and the efficiencies those services bring. Relatedly, businesses stand to benefit from more effective communication and connectivity through technological innovation under 4IR.

However, 4IR raises concerns about widening existing inequality. Given that automation and digitalisation can (if they have not already) substitute for and displace workers, this might “exacerbate the gap between returns to capital and return to labour”. In other words, low-skilled workers, who generally come from the poorer segments of society, would increasingly face scarce job opportunities, while the owners of capital (in this case, the automation, robotics and digital systems) – mainly innovators, shareholders and investors – would exemplify the colloquial phrase “the rich get richer”. Considering how much of the anxiety and discontent of the current age is fuelled by inequality, perceived or otherwise, growing inequality is certainly a growing problem.

Nonetheless, Schwab reminds us that all of us are responsible for guiding this evolution, and that 4IR can complement the “best of human nature”, bringing the human race to a new level of moral consciousness based on a shared sense of destiny.

Industry 4.0

So how is Industry 4.0 different from 4IR? To begin with, the term comes from a different time and place.

The term Industry 4.0 found its origins in Germany’s Industrie 4.0, a high-tech strategy by the German government to promote technological innovation in product and process technologies within the manufacturing industry. The Malaysian Ministry of International Trade and Industry defines Industry 4.0 as “production or manufacturing based industries digitalisation transformation, driven by connected technologies”.

At the core of Industry 4.0 lie several foundational design principles:

  1. interconnection between people, systems and devices through the Internet of Things;
  2. information transparency, to provide users with plenty of useful information to make proper decisions;
  3. technical assistance, to support the aggregation and visualisation of information for real-time decision making and problem solving, and to help users with tasks that are unfeasible to perform manually;
  4. decentralised decisions, made autonomously by the systems themselves.

These design principles entail that Industry 4.0 heavily involves specific types of technology, mainly those for achieving inter-connectivity and automation. Cleverism cited a research paper which identified the main technologies involved in Industry 4.0, outlining four main components: Cyber-Physical Systems, the Internet of Things, the Internet of Services and the Smart Factory.

Given that the Internet of Things has been widely discussed (even on this site), let’s have a brief look at what the other terms entail:

  • Cyber-Physical Systems aim to integrate computation and physical processes, such that physical processes can be monitored by devices over a network. Developing such systems involves the unique identification of objects throughout processes, the development of sensors and actuators for the exchange of information, and the integration of those sensors and actuators (a minimal sketch of this monitor-and-actuate loop follows this list).
  • The Internet of Services looks at how connected devices (under the Internet of Things) can become an avenue of value creation (and revenue generation) for manufacturers.
  • A smart factory is a manufacturing plant that puts the aforementioned concepts together, by adopting a system that is aware of the surrounding environment and the objects within it. As the research paper mentioned, “the Smart Factory can be defined as a factory where Cyber-Physical Systems communicate over the Internet of Things and assist people and machines in the execution of their tasks”.
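To make the Cyber-Physical Systems idea a bit more concrete, here is a minimal sketch in Python of the monitor-and-actuate loop at the heart of such systems. The sensor and actuator functions are hypothetical stand-ins for whatever hardware interface a real plant would use:

```python
import time
import random

def read_temperature_sensor() -> float:
    """Stand-in for sampling a machine's temperature over a network."""
    return 60.0 + random.uniform(-15.0, 15.0)

def switch_cooling(on: bool) -> None:
    """Stand-in for driving a physical actuator (e.g. a cooling fan)."""
    print("cooling", "ON" if on else "OFF")

# The core cyber-physical loop: monitor a physical process through a
# sensor, decide, then act back on the process through an actuator.
THRESHOLD_C = 70.0
for _ in range(5):
    reading = read_temperature_sensor()
    print(f"machine-01 temperature: {reading:.1f} C")
    switch_cooling(reading > THRESHOLD_C)
    time.sleep(1)  # sampling interval
```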

The benefits and challenges of Industry 4.0 are similar to those of 4IR, with some focusing more specifically on the impact on the manufacturing industry.

One such benefit is allowing manufacturers to offer customisation to customers. Industry 4.0 empowers manufacturers with the flexibility to respond to customer needs through inter-connectivity, the Internet of Things and the Internet of Services. With connectivity to consumer devices through the Internet of Things, manufacturers would have seamless access to consumer behaviours and needs, and therefore have the potential to cater to unique consumer demands faster than conventional go-to-market methods.

On the flip side, one of the greatest challenges Industry 4.0 faces is security and privacy. With great powers of connectivity comes great responsibility in ensuring the data transmitted across these connections is protected. The security challenges discussed in the Internet of Things article apply to manufacturers too, all the more so considering that processing methods are trade secrets in most manufacturing industries. Meanwhile, as manufacturers increasingly assume the role of collectors of consumer data under Industry 4.0, consumers’ concerns about how their data might be handled and used will grow.

Still, despite the challenges, the future is bright for Industry 4.0 due to the process efficiencies it promises, which is why the Malaysian government seems convinced of and committed to this technological trend, and has acknowledged the need for industries to transform accordingly through its 2019 fiscal Budget allocations for SMEs adopting automation technology.

Final Thoughts

In conclusion, even though the terms “Fourth Industrial Revolution” and “Industry 4.0” may be used interchangeably, it is rather clear that the two have different focuses. Some suggest that Industry 4.0 is a more relevant discussion of technological progress than the concept of 4IR, while others consider Industry 4.0 a subset of the Fourth Industrial Revolution.

At the end of the day though, it is up to us to figure out how we should envision the future ahead with 4IR and Industry 4.0, by resolving pertinent issues surrounding personal data ethics and the future of the workforce.

Featured Image credit: Christoph Roser at AllAboutLean.com

Quick Take: Tech in 2018/2019 – The Year That Was, And Is To Come

We have come to the end of the year 2018, with the new year of 2019 chiming in moments away. I thought it would be a great idea to take a look at tech in the past year, and the stuff we could anticipate in the coming year.

But first, let’s look at what failed in the year that was.

Tech Fails in 2018

Source: 
https://www.digitalinformationworld.com/2018/12/roundup-of-2018-biggest-tech-failures.html
https://www.zdnet.com/article/the-worst-tech-failures-of-2018/
https://www.questionpro.com/blog/rise-and-fall-of-tech-innovation-2018-technology-failures/

As this site does not discuss tech gadgets, I will be focusing on concepts, ideas and principles in this post.

The year 2018 proved to be a bad year for privacy, with the world experiencing massive security breaches at Facebook and Google+, the latter to be shut down in 2019. Also related to privacy was the hacking of Bitfi, an electronic wallet for cryptocurrency.

Speaking of cryptocurrency, it was a downhill slide throughout most of 2018 for cryptocurrencies across the board, with Bitcoin seeing a price decline of ~73% since the start of 2018. Also plaguing 2018 were downtime issues with cloud computing providers, as well as Google’s alleged involvement in creating a censored search engine for China and a warfare system using AI.

Some, having observed little progress in 2018, would classify the Internet of Things, Big Data and Virtual Reality as technologies that did not live up to their hype in 2018.

Still, if you asked me, the real tech fails of 2018 arose from the U.S. Congressional hearings with Facebook’s Mark Zuckerberg and Google’s Sundar Pichai.

Now that we have somewhat summarised the fails, let’s move on to the slightly positive side of things.

Tech Wins in 2018

Source:
https://www.cnet.com/news/the-top-tech-stories-of-2018/
https://www.zdnet.com/article/2018-technology-trends-thatll-matter-a-decade-from-now/
https://www.recode.net/2018/12/13/18106455/best-of-2018-data-charts-tech-end-year-list-amazon-facebook-juul-moviepass-elon-musk

The following may not actually be tech wins, but they can be deemed positive breakthroughs (the word “wins” serves to juxtapose the word “fails”).

In 2018 we saw Google introduce its Duplex artificial intelligence software, which can make reservations and appointments over the phone while emulating the speech nuances of a human, indicating progress not merely in natural language processing but also in generating natural language content. This is indeed significant progress in the artificial intelligence field.

We saw that cloud computing had a great year: Amazon Web Services’ partnership with VMware gained steam and encouraged the former to lay out greater ambitions, Microsoft’s Azure commercial cloud business is looking at hitting $34 billion in annual revenue based on its current Q1 run rate, and IBM decided to throw its hat in the ring and raise the game with its acquisition of Red Hat.

On a greener note, electric vehicles were selling like hot cakes, with the U.S. seeing its 11-month vehicle sales up 57% from full-year 2017 – on the back of President Trump’s steel and import tariffs. Electric scooters were also flourishing (although they garnered some hate as well).

From a more general perspective, tech companies invested more than ever before in 2018, with record capital expenditures. The biggest companies made various acquisitions, from real estate to data centres, to keep up with customer demand and stay competitive. It remains to be seen what will come out of this greater investment, but it is an encouraging sign for consumers and the industry as a whole.

Tech Hopes in 2019

Source:
https://www.thestar.com.my/tech/tech-news/2018/12/24/unfolding-future-innovation/
https://www.forbes.com/sites/steveandriole/2018/10/22/gartners-10-technology-trends-for-2019-the-good-the-obvious-and-the-missing/

Now, I will gloss over some of the technologies mentioned above which will continue their trajectory in the coming year, such as IoT and artificial intelligence. Instead, I will highlight a few interesting trends to look out for in 2019.

Right off the bat is the rollout of 5G network connectivity, which is expected to improve on current 4G LTE data speeds. This would become a catalyst for expanding IoT technology, especially autonomous vehicle technology. In 2018, 5G connectivity trials were conducted both abroad (e.g. Frankfurt, San Francisco) and within Malaysia (Cyberjaya and Putrajaya).

As a result of 5G connectivity, augmented reality might become a thing, having somewhat disappointed in 2018. During a tech conference in September 2018, Vodafone demonstrated 3D holographic calls over a 5G network. Of course, this is probably a gimmick to illustrate how much data can be streamed over 5G rather than something ordinary laymen could replicate, but it certainly opens up opportunities for introducing immersive experiences once mobile data speeds are greatly improved. Other updates in the AR space include Facebook’s announcement that it will add body and hand tracking features to its AR development tools.

Augmented analytics could also see significant progress with advancements in AI and big data. For those unsure what “augmented analytics” means, it can be understood as an approach to “automate insight generation in a company through the use of advanced machine learning and artificial intelligence algorithms”. In other words, data analysis without heavy dependency on data analysts and data scientists.
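As a toy illustration of what “automated insight generation” could mean, here is a minimal sketch in Python using pandas that scans a made-up sales table for strongly correlated numeric columns and phrases them as plain-language “insights”, with no analyst in the loop. A real augmented analytics product would of course go far beyond this:

```python
import pandas as pd  # pip install pandas

# A made-up sales table standing in for real business data.
df = pd.DataFrame({
    "ad_spend":     [100, 200, 300, 400, 500, 600],
    "store_visits": [55, 48, 60, 52, 58, 50],
    "revenue":      [1100, 2050, 3200, 3900, 5150, 5900],
})

# Scan every pair of numeric columns and surface strong correlations
# as plain-language "insights" - no analyst in the loop.
corr = df.corr()
for a in corr.columns:
    for b in corr.columns:
        if a < b and abs(corr.loc[a, b]) > 0.8:
            direction = "rises" if corr.loc[a, b] > 0 else "falls"
            print(f"Insight: {b} {direction} with {a} "
                  f"(correlation {corr.loc[a, b]:.2f})")
```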

In a nutshell

To be frank, when researching 2018 in tech, the major stories dominating the year were unfortunately in the negative spotlight. Perhaps it has come to the point where society will be challenged more than ever to consider privacy and ethics – that is, if you can get digital natives to care.

2019 will be a challenging year for all, given the growing uncertainty in the global geopolitical and economic landscape – grave changes will certainly have a knock-on effect on the tech industry. But if we are to learn anything from recent human history, it is that technological progress takes place at its own pace regardless of global circumstances – the more pertinent question is: where, then, will it stem from?

Certain leaders really need to be reminded that when one part of the world loses its global prowess, whether through regressive policies or isolationism, other places will take its place.

Nevertheless, let’s enter the new year with fresh hope and optimism – for the unknown future presents boundless opportunities. The ball is now in our court.

Sidetracked: Black Mirror: Bandersnatch, Netflix’s first interactive film for adults

Firstly, the reason behind the word “sidetracked”: I originally planned to publish a year-ender post to recap 2018 and preview 2019 in tech (that post is still being drafted), but having watched Black Mirror: Bandersnatch, I felt a sense of urgency to comment on a thing or two about the show and its use of a “Choose Your Own Adventure” interactive element as a way of storytelling – hence the original plan being sidetracked.

There is also the fact that writing about a show could be deemed a sidetrack from what this site usually discusses. But I will try to inject some relevance nonetheless.

For those (regrettably) uninformed, Black Mirror is a television series created by Charlie Brooker which falls under the genres of “science fiction”, “dystopian”, “satire”, “horror” and “anthology”, as described by Wikipedia. The show premiered its first two seasons on Channel Four before being purchased by Netflix, where it continued for another two seasons. Season 5 is expected to be released in 2019, whereas Bandersnatch was released on 28 December 2018.

Bandersnatch is an interactive film where the audience makes decisions for the main character, Stefan Butler, a young programmer who attempts to adapt a fantasy novel into a “Choose Your Own Adventure” video game. You can see how “meta” this film gets: viewers play a game to choose the narrative of a young programmer who designs a game allowing players to choose their own narrative.

Without spoiling too much of the story, I felt that this film was a critique of Black Mirror fans, who usually take pleasure in the demise of the show’s main characters. This is even explicitly proclaimed by the main character in one of the endings, in a fourth-wall-breaking moment.

The film also takes aim at the nature of “Choose Your Own Adventure” games (and now, films), where there is only an illusion of free will for the player-audience. The film has multiple endings, but only a handful can seriously be considered conclusions to the story. Certain pathways lead to the story ending abruptly, while others leave the audience with a rather unsatisfying end; either way, the film offers choices for the audience to go back and alter their previous decision(s). In short, the choices offered to the player-audience at each point are an illusion; the player-audience is still subject to the storyline set out by the show’s creators and writers. And in a way, the player-audience can relate to the film’s main character when he suspects he is being controlled.

Now, coming back to how a discussion of Bandersnatch may be relevant to a site that discusses tech. It is obvious that this film is unlike most (if not all) other films in its ability to get viewers actively involved in the progression of the story. And this is made possible by the show being on the online-based Netflix; this film could not happen on terrestrial television, whose infrastructure simply does not allow for user input and engagement.

It is then not surprising that Bandersnatch, despite being one of a kind, is not the first interactive show on Netflix: there are at least two shows for children which enable audiences to choose their own adventure as well.

Could the idea of play-watching be the future of television entertainment? (And I mean beyond real-time game shows, which are a thing now.) I predict that more show creators will see this as a viable way to express creativity, although it involves a far more complex production than conventional shows – Bandersnatch runs at around 90 minutes, but all the footage for the production reportedly clocked in at over 5 hours.

Personally, as much as I enjoyed the “Choose Your Own Adventure” show in general, I found that pursuing other conclusions after the first one reached is a confusing method of storytelling at first, especially if different endings are supposed to differ drastically from each other – you would not know which was the “true” ending intended by the showrunners.

But as far as Black Mirror: Bandersnatch is concerned, all the endings lived up to what Black Mirror is known for: the psychological, dystopian darkness that has amassed a cult following comparable to other major cable TV shows.

I like how Vox reviewed the film (and how The Verge mentioned the Reddit detectives cracking all the endings and Easter eggs), but by now there are many reviews published by different sources, ranging from news sites to YouTube channels. So go ahead and choose your own review to read. Or better yet, choose your own adventure by play-watching it yourself.

From What I Did: Takeaways from My First Datathon with Data Unchained 2018 by Axiata

Some context about this post: It all started with a LinkedIn message from Phung Cheng Shyong asking whether I would be interested in participating in a datathon, as he was looking for teammates. As someone who does not come from a data science background, my first thoughts were “What’s a datathon?” and “Pfft, are you looking for the right person?”. But on second thought, the event certainly seemed curious and interesting to an outsider. One thing led to another, and there I was, pictured with the rest of team ALSET as above, having endured a 24-hour challenge of brains while battling fatigue against the time limit.

Here are some thoughts and lessons learned from the event:

  1. To really become proficient in data science, one needs hands-on experience working on datasets (be it for work or hobby) – tutorials alone are sorely insufficient. It is when working on datasets, and attempting to find insights by executing models, that one realises what needs to be done. Team ALSET is grateful for its sole data scientist, Cheng Shyong, who has done prior data analytics work both professionally and as a hobby. But even so, he could not complete the challenge with his experience alone, which is why…

  2. Stack Exchange, Kaggle and other knowledge-exchange sites for data science are all-important, whether for picking up new analytics tools or as a refresher on methods and procedures previously learnt. These sites serve as a guide on how the coding work should proceed, especially for new tools, and as a troubleshooting companion when the code turns out erroneous. From this, I could see why the data science community is quite a close-knit one, thanks to its openness in exchanging knowledge.

  3. A business model that exploits the prediction models is preferable to one that does not. The event placed emphasis on the business relevance of the data analytics work done, which meant that teams with only technical people on board were not necessarily at an advantage if they fell short in effectively communicating how the models could be applied or used in a business setting. For this, team ALSET is grateful to have had two MBA candidates from Asia School of Business, Maksat Amangeldiyev and Saloni Saraogi, to help with the business case portion of the challenge.

  4. Do not underestimate the power of sleep and naps. From this experience, I can testify that three hours of sleep from 4am is barely enough to get through the afternoon, especially for someone who is neither a night person nor accustomed to working on little sleep. A teammate’s advice to take a short nap after lunch proved an effective energy recharge for the rest of the day.

  5. Keep an open mind, and be optimistic. Our presentation featured a short video showcasing how the proposed solutions of our business model might look – pulling this off within the limited time frame seemed (from a personal view) impossible at first. However, Maksat and Saloni leveraged their resources and connections to turn it into reality, which goes to show that ideas should not be discounted at first thought. Both of them also displayed an admirable level of optimism and positivity, which was a great driver pushing the team to perform even when the prospects of success seemed minute. Perhaps such optimism is one of the crucial things that define a successful person – someone able to be the positive energy around others even when the going gets tough.

At the end of the day, I believe this event provided more than mere experience; it provided the opportunity to meet and get to know different people, and to learn lessons from them.

(A shout-out to Low Yen Wei for suggesting that the takeaways be written up as an article. This article is also published on LinkedIn: https://www.linkedin.com/pulse/from-what-i-did-takeaways-my-first-datathon-data-unchained-yau/ )

Quick Take: Malaysia’s 2019 Budget and Technology

Recently, I received a suggestion from a reader to write about Malaysia’s recent Government Budget for the 2019 fiscal year. I see this post as a nice break from the usual posts explaining technology topics. After all, the Government Budget for 2019 is indeed a topic relevant to the future – so why not have a quick glance at it?

And as the title suggests, it should be quick – like 2, 3 minutes quick.

Before I go into the specifics, it is imperative to note that this is the first Government Budget tabled since the Pakatan Harapan government was installed in May 2018. The overarching theme of the budget is fiscal consolidation, after what Finance Minister Lim Guan Eng described as the mismanagement of public funds by the previous government. With the trade war between the US and China looming over the global economic landscape as well, it is certainly a challenging budget in terms of accommodating multiple considerations, and subsequently committing to them.

Here are a few points I have gathered:

1. A “yay” for the businesses

Now, the proposed measures are not as fancy as the “Malaysian Vision Valley” sort, but companies in the tech industry have commended the constructive measures aimed at helping businesses, which include a series of initiatives under the National Policy on Industry 4.0, known as Industry4WRD (nope, that is not a spelling error – it is supposed to be pronounced “industry forward”).

The proposed measures include:

  1. RM210 million to support businesses in transitioning to Industry 4.0, with the Malaysian Productivity Corporation helping the first 500 SMEs to undergo readiness assessments for this migration;
  2. RM2 billion under the Business Loan Guarantee Scheme (SJPP) to support SMEs investing in automation, with guarantees of up to 70%;
  3. RM3 billion for the Industry Digitalisation Transformation Fund, at a subsidised 2% interest rate, to support the adoption of smart technology among businesses;
  4. RM2 billion to be set aside by government-linked investment funds to co-invest on a matching basis with private equity and venture capital funds in strategic sectors and new growth areas;
  5. RM2 billion for the Green Technology Financing Scheme, with a 2% subsidised interest rate for the first five years.

Of course the full list is longer, but you get the gist of it.

2. A “yay” for infrastructure

Prior to the budget announcement, the government had already facilitated a round of price reductions for fibre internet services. However, this drew some flak from people living in areas without access to such services. Possibly in response, the government announced an allocation of RM1 billion for the National Fibre Connectivity Plan, which aims to provide rural and remote areas with internet speeds of 30Mbps within five years.

3. A “huh” for the ordinary folks

Now, “huh” is an ambiguous expression, which is probably fitting to describe the takeaways from the Budget for the layman on the street.

On one hand, the government announced an RM10 million allocation to the Malaysia Digital Economy Corporation (MDEC) for the development of eSports in the country. This seems like a boost to the e-gaming community and industry, where the measure is perceived as a step towards giving due recognition to an area that, frankly, is still riddled with stigma from certain sections of society.

On the other, the government announced plans to impose a tax on imported online services beginning January 2020 – and yes, this includes Netflix, Spotify and Steam, as shown on the Budget presentation screen. The specifics of how the tax will be imposed have yet to be announced, so some clarity in this space is required.

To add on, there was no announcement of personal tax relief for the purchase of devices – presumably this will be absent from the list of tax benefits for the 2019 tax year.

And then there is the peer-to-peer property crowdfunding platform announced for aspiring first-time owners. There has been quite a bit of buzz surrounding the idea of crowdfunding one’s way to owning a house, with some netizens claiming it is nothing more than a glorified version of a Rent-To-Own scheme, while others describe the measure as a prelude to a subprime mortgage crisis. Now, I am not able to comment given my employment with one of the stakeholders involved in this platform, but if anything, there should be clarity for the ordinary people on the street on the details of this initiative.

So that’s about it – a brief look at what the Malaysian 2019 Government Budget bodes for Digital Malaysia.

References

Full Budget Speech: https://www.thestar.com.my/news/nation/2018/11/02/here-is-the-full-speech-by-finance-minister-lim-guan-eng-during-the-tabling-of-budget-2019/

Compilation of views from the tech space by Digital News Asia:

https://www.digitalnewsasia.com/digital-economy/budget-2019-new-technology-drive-dynamic-economy;

https://www.digitalnewsasia.com/digital-economy/pikom-welcomes-budget-incentives-growth-digital-economy

A netizen’s comment on property crowdfunding platform initiative: https://klse.i3investor.com/blogs/purelysharing/180971.jsp

Image by Andre Gunaway of Tech in Asia: https://www.techinasia.com

From What I Read: Internet of Things

I have been thinking about how I should begin this post, since this is the comeback post after a 2-3 month hiatus. And then I thought: perhaps it is a nice time to review the way I write these posts. I found myself to have fallen into a trap – one that entices a writer to produce a bunch of words that may end up conveying little. The content I wrote might look too overwhelming and tedious to read, while leaving readers walking away having learned not as much.

Maybe it is time to force upon myself the KISS principle – Keep It Short and Simple. Less, when done right, could be more.

With that said, I am reviewing the format of how “From What I Read” is written, beginning with the set of questions I seek to answer. Previously, there were 5 questions: what is the subject about; how does it work/come about; how does it impact positively; what are the issues; and how do we respond. Seeing that the questions may have overlapping elements, it would be better to group them into just three main items: The Subject – to address the definition and operating principles behind the topic; The Use Cases – to lay out functional examples and proposed applications; The Issues – to lay out problems surrounding and arising from the topic. For now, I will toy with the idea of embedding the “how do we respond” component throughout the three items, while also reiterating it in a conclusion.

Secondly, the style of writing will also be reviewed. Although there are merits to an academic style of writing, the layman audience (whom these posts are written for) by and large may not be able to appreciate it. This revamp will be a harder challenge than narrowing five sub-topics down to three, since writing style is something embedded in the writer – but hey, if you don’t try, you may never know. (Of course, the point of writing slightly more academically is to appropriately attribute ideas to the authors I sourced them from – but I guess the readers here, if there are any, do not really care as long as there is a list of references. I will probably hyperlink where the main ideas are sourced from instead of writing authors’ names here and there.) And yes, I think I should inject some casualness into the writing, just to experiment with styles.

With all things considered, let’s get started with the Internet of Things (IoT).

The Subject

When it comes to IoT, some of us may have seen cool video clips of how a futuristic home would look: from automatic doors and windows, to automated climate control (a fancy phrase for air-conditioning), smart refrigerators, and now even pre-warming your bed before your slumber (for those in temperate climates, of course – beds like those would not be very welcome in tropical weather). Well, some of these things are really not that far off from reality, thanks to advancements in connectivity and electronics technology.

IoT, if I may explain it simply, is the enabling of devices to “communicate” with each other by being connected to the internet (or to each other). This implies being able to turn devices on and off (and even fine-tune their settings), whether through programming based on input from the surroundings or other devices’ reactions (like cameras turning on after motion sensors are triggered by movement), or remotely through an external device (like a smartphone) connected to the internet (or to a private network).

So how do these IoT devices “talk” to, and even “instruct”, each other? The network that connects these devices sets the stage for how the communication is facilitated. Devices can be connected via Ethernet or Bluetooth (for shorter, closer range), WiFi (for medium range), or LTE and satellite communication (for wider coverage). Processing of data from the devices’ sensors is generally done in the “cloud” (on servers), but it is expected that as device technology develops, more processing will be conducted on-device before useful data is relayed back to the cloud.
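As a minimal sketch of that “talking”, here is how a device might publish a sensor reading, and another react to it, over MQTT – a lightweight publish/subscribe protocol commonly used in IoT – using the paho-mqtt Python library (1.x-style API). The broker address and topic are placeholders:

```python
import time
import paho.mqtt.client as mqtt  # pip install paho-mqtt

# React to messages on subscribed topics, e.g. an air-conditioner
# controller listening for temperature updates.
def on_message(client, userdata, message):
    print(message.topic, message.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)  # placeholder broker
client.loop_start()                         # network loop in background

client.subscribe("home/livingroom/temperature")

# A device publishes its sensor reading to the topic; every subscriber
# (including this client) receives it through on_message.
client.publish("home/livingroom/temperature", "28.5")

time.sleep(2)      # give the round-trip a moment to complete
client.loop_stop()
```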

The Use Cases

Currently, one of the most prominent functional examples of IoT is the smart speaker, such as Google Home and Amazon’s Echo, where users can set reminders and timers, obtain information such as news, and even do online shopping via voice command. Integrated with other smart devices such as smart light-bulbs, users can further make home automation a reality. Such a use of IoT does not merely serve as a fancy trick when receiving guests at home; it is all the more meaningful to the elderly and disabled, whose physical movements may be limited and constrained.

Of course, IoT may also help consumers in planning and managing resources. There are ideas where smart refrigerators may be able to detect shortages in grocery supplies and alert users to restock (and even offer to place the order). Meanwhile, smart climate control systems could proactively manage energy usage to achieve the desired indoor temperature, aiding energy efficiency.

On a more macro scale, some cities have used IoT for traffic control and management, and some for waste management. That being said, implementing and developing smart cities requires a huge sum of public investment, plus the reception and adoption of IoT by the masses. However, as autonomous vehicles increasingly become a reality, there will also be significant progress in smart city technologies to facilitate integration with these smart cars.

Outside of consumer applications, IoT has found its place in businesses as well. There are use cases for IoT on dairy farms, where the health of livestock is monitored. In the healthcare industry, IoT can be deployed for real-time tracking and assistance of patients from a remote location, such as dispatching help as soon as sensors detect a patient’s fall. In broader industrial use, there are use cases in the form of smart security systems and smart air-conditioning systems that provide effective and efficient control of the environment.

Coming back to consumer applications, sensors in smart devices help relay data regarding a device’s performance back to the manufacturer. This allows manufacturers to perform better after-sales service, such as maintenance, repairs and replacements, which may enhance the value proposition offered by enterprises to consumers.

And in line with enhancing the value proposition, businesses may be able to understand their customers better as they gather more data from the smart devices in use, and further provide solutions tailored to customers’ needs.

The Issues

I think this point of the post is ideal for highlighting the elephant in the room – data privacy. Since the sensors of smart devices detect users’ actions before storing and relaying data, data on individuals is unquestionably being collected somewhere – and as we discussed earlier, the companies manufacturing these devices are collecting it. Of course, not all such companies build their IoT business model on selling this data, but the phrase “not all” suggests that some do.

Some might think that data about one’s room temperature is not too huge a matter to fuss about. But keep in mind that with multiple data inputs combined in analysis, one’s latest activity could be figured out – not something we would necessarily want a third party to know.

This brings us to another related topic – security. Flawed IoT networks and devices can be susceptible to attacks by hackers. Remember the smart speakers mentioned earlier? While individuals may not mind trivial conversations at home being eavesdropped on, a compromised smart speaker in an office setting would have serious consequences.

And then there is the dependency on the internet to function, which poses significant concentration risk on internet and electricity infrastructure. In a world of devices connected to the internet, electricity and the internet itself are rendered “too big to fail”. In the event that electricity or the internet does fail, the outcome could range from being annoyed by non-functioning household appliances to being “imprisoned” by non-functioning smart locks.

All these concerns aside, we must be cognisant that IoT depends on the availability of high-speed internet, and that these devices take up a lot of broadband bandwidth. This presents a two-fold problem: one, we may need to partition part of the high-speed connection some of us currently enjoy for smart devices to transmit data to the cloud; two, not all of us have reliable access to high-speed internet at this point in time, nor will we in the foreseeable future. Thus, until broadband services are made affordable and accessible to more people, IoT will struggle to take off into mass adoption.
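As a back-of-envelope illustration of the bandwidth point – with all device counts and per-device rates being made-up assumptions, not measurements – consider how quickly a modest set of smart devices could eat into a home connection:

```python
# Back-of-envelope bandwidth estimate. Every figure here is an
# assumption for illustration, not a measurement.
devices_mbps = {
    "security cameras (2, streaming)": 2 * 2.0,   # Mbps per camera
    "smart speakers (3)":              3 * 0.3,
    "assorted sensors (20)":          20 * 0.01,
}
iot_load = sum(devices_mbps.values())
broadband = 30.0  # Mbps, the National Fibre Connectivity Plan target
print(f"IoT load: {iot_load:.1f} Mbps "
      f"({iot_load / broadband:.0%} of a 30 Mbps connection)")
# -> roughly 5 Mbps, about a sixth of the connection, before anyone
#    streams a single video.
```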

The Takeaway

The use cases and issues highlighted here are probably the mere tip of the iceberg of how IoT could impact our everyday lives, providing indicators of how we may need to transform the way we currently do things on the journey to widespread IoT adoption.

And like most other topics highlighted previously, the recurring theme of concern is personal privacy – how much privacy (which constitutes personal liberty and freedom) are we, as a society, willing to sacrifice for convenience?

Or perhaps such a question may soon lose its relevance to a generation born with smart devices and Facebook-enabled services, increasingly indifferent to sharing the data of our digital identity in exchange for the convenience of skipping an entire sign-up form.

References

What is the Internet of Things? WIRED explains by Matt Burgess: https://www.wired.co.uk/article/internet-of-things-what-is-explained-iot

What is the IoT? Everything you need to know about the Internet of Things right now by Steve Ranger: https://www.zdnet.com/article/what-is-the-internet-of-things-everything-you-need-to-know-about-the-iot-right-now/

Your terrible broadband will kill the Internet of Things dead by Steve Ranger: https://www.zdnet.com/article/your-terrible-broadband-will-kill-the-internet-of-things-dead/

A Simple Explanation of ‘The Internet of Things’ by Jacob Morgan: https://www.forbes.com/sites/jacobmorgan/2014/05/13/simple-explanation-internet-things-that-anyone-can-understand/#72f188651d09

What Is the Internet of Things? by Fergus O’Sullivan: https://www.cloudwards.net/what-is-the-internet-of-things/

Smart cities: A cheat sheet by Teena Maddox: https://www.techrepublic.com/article/smart-cities-the-smart-persons-guide/

The Smart Way To Build Smart Cities by HBS Working Knowledge: https://www.forbes.com/sites/hbsworkingknowledge/2018/04/04/the-smart-way-to-build-smart-cities/#31df532b7b19

P.S. By the way, part of the reason I chose to write about IoT is the upcoming Axiata Data Unchained 2018 datathon, in which I will be participating – I will try to document it and put it into a future post.