Part 2 of the Tech Trends 2018 article series covers:
2. The No-Collar Workforce
3. Enterprise Data Sovereignty

See the Tech Trends 2018 report.
Below is a summary of the key points:
(Note: this summary focuses on the core content and the action items.)

2. The No-Collar Workforce

A mindset and skill set for the no-collar workforce

Workers (and bots) of the world, unite!

Culture: creating new ways of working within a culture of human/machine collaboration and hybrid talent. Work culture becomes one of augmentation.
Tech fluency: in an augmented workforce model, technologies become components of work, so workers need fluency in the tools they work alongside.
HR for humans and machines: HR must now answer questions about sourcing and integrating intelligent machines into work environments.
Leading by example
Just as the no-collar trend may disrupt IT, finance, and customer service, so too could it disrupt HR organizations, their talent models, and the way they work.
Skeptic’s corner
Misconception: Automation will simply replace human workers.
Reality: technology cannot duplicate many uniquely human workplace strengths such as empathy, persuasion, and verbal comprehension.
Misconception: Tech fluency matters only for technical workers.
Reality: Becoming conversant in technology can help workers of all backgrounds understand not only the realities of today but the possibilities of tomorrow.

Lessons from the front lines


Risk implications

The Center for Cyber Safety and Education has predicted that there will be 1.8 million unfilled cybersecurity positions by 2022.
A cyber professional's job includes collaborating with peers to build knowledge of attack mechanisms and to develop creative solutions.
Teams augmented with robotic process automation could therefore experience friction derived from the dynamic of mission-based humans versus rules-based bots.
Bots can help mitigate cyber risk by automating control activities to facilitate reliability, consistency, and effectiveness.
But bots themselves could be targets in an attack, exposing sensitive employee and customer data that could damage a company’s reputation.
Because this entails more than equipment decisions, comprising policy and personnel strategies as well, business and IT should work together closely to define cyber workplace guidelines to mitigate risk.
As we automate tasks and augment workers, new regulatory and compliance issues may emerge, such as privacy concerns and labor laws.
Automation of functions such as threat intelligence, security application monitoring, and privilege management may result in greater consistency, reliability, and coverage.
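A minimal sketch of an automated control activity of the kind described above: a privilege-management check that applies the same policy on every pass, which is where bots bring consistency, reliability, and coverage. The account data, role names, and policy are illustrative assumptions, not any real system's schema.

```python
# Hypothetical privilege-audit bot: flag accounts whose role is not
# permitted for their department. Running this on a schedule gives the
# consistent, repeatable coverage a manual review cannot.
ALLOWED_ROLES_BY_DEPT = {
    "finance": {"reader", "approver"},
    "it": {"reader", "admin"},
}

def audit_privileges(accounts):
    """Return the users whose role violates the department policy."""
    violations = []
    for acct in accounts:
        allowed = ALLOWED_ROLES_BY_DEPT.get(acct["dept"], set())
        if acct["role"] not in allowed:
            violations.append(acct["user"])
    return violations

accounts = [
    {"user": "alice", "dept": "finance", "role": "approver"},
    {"user": "bob", "dept": "finance", "role": "admin"},    # not allowed
    {"user": "carol", "dept": "hr", "role": "reader"},      # unknown dept
]
print(audit_privileges(accounts))  # -> ['bob', 'carol']
```

As the text notes, the bot itself then becomes an asset to protect: its policy table and its access are exactly what an attacker would target.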

Global impact

Where do you start?

Building a no-collar workforce requires deliberate planning.
  • Assess your needs: Is the no-collar trend a viable option for your company? To answer this question, identify all the areas in your organization where mission-critical activities that do not contain uniquely human work elements occur. Are there opportunities to augment human performance in these areas? If so, are the opportunities compelling? In some companies, augmentation opportunities are potentially transformative; in others, not so much. Remember: Let needs, not technology, drive your strategy.
  • Understand how work currently gets done: For each task within a given process, identify who is performing the task, the skills required to complete the task, and the technologies enabling not only this specific task but adjacent or dependent tasks within the larger process. This informational baseline will help you challenge your own assumptions about existing processes, and then explore different talent options and technologies that can be used in concert to improve overall process efficiency. It may also spark fresh ideas about the impact that automation will have on your organizational structure.
  • Categorize skills and tasks: Define the difference between skills that only humans have, such as ethical or creative thinking, and nonessential tasks that machines can perform. Understanding that difference can eventually help you redesign jobs, identify opportunities for augmentation, and develop automation strategies.
  • Investigate tools and tactics: What cognitive technologies and advanced robotics solutions are currently used in your industry? What new advances appear on the horizon? The speed of technological innovation is bringing disruptive tools online faster than ever. In this environment, IT, HR, and business leaders should stay up to speed on advances in intelligent automation, and try to identify how emerging capabilities and concepts could impact productivity and job design at their companies.
  • Easy does it or full steam ahead? Different smart technologies require different approaches. Are you sufficiently ambitious to explore opportunities for “brute force” automation initiatives involving bots? Or does your ambition (and perhaps your budget) align more closely with less disruptive deployments of cognitive technologies or AI? Which approach better supports your organization’s overall mission and strategic priorities?
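The "categorize skills and tasks" step above can be sketched as a simple triage: tag each task with whether it needs uniquely human strengths (empathy, ethical or creative judgment) and whether it is rules-based, then bucket it as a candidate for automation, augmentation, or human-only work. The task list and attributes are illustrative assumptions.

```python
# Hypothetical task triage for an automation strategy.
def categorize(task):
    """Bucket a task as human-only, augment, or automate."""
    if task["uniquely_human"]:
        # Uniquely human work stays human; rules-based parts can be assisted.
        return "augment" if task["rules_based"] else "human-only"
    return "automate" if task["rules_based"] else "augment"

tasks = [
    {"name": "invoice matching", "uniquely_human": False, "rules_based": True},
    {"name": "client negotiation", "uniquely_human": True, "rules_based": False},
    {"name": "triage support tickets", "uniquely_human": False, "rules_based": False},
]
for t in tasks:
    print(t["name"], "->", categorize(t))
# invoice matching -> automate
# client negotiation -> human-only
# triage support tickets -> augment
```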

Advances in artificial intelligence, cognitive technologies, and robotics are upending time-honored assumptions about jobs, careers, the role of technology in the workplace, and the way work gets done. The no-collar trend offers companies the opportunity to reimagine an entirely new organizational model in which humans and machines become co-workers, complementing and enhancing the other’s efforts in a unified digital workforce.
Anthony Abbatiello is a principal in Deloitte Consulting LLP, based in New York.
Tim Boehm is a principal with Deloitte Consulting LLP and is based in Houston.
Jeff Schwartz is a principal with Deloitte Consulting LLP, based in New York.
Sharon Chand is a principal with Deloitte & Touche LLP's Cyber Risk Services practice and is based in Chicago.

3. Enterprise Data Sovereignty

If you love your data, set it free
Data should be free: free not in a monetary sense but, rather, in terms of accessibility and ubiquity.
Organizations' efforts will focus on solving data challenges in three domains: management and architecture, global regulatory compliance, and data ownership.
The enterprise data sovereignty trend offers a roadmap that can help companies answer these and other questions as they evolve into insight-driven organizations. Without a doubt, this transition will require long-term investments in data integration, cataloging, security, lineage, augmented stewardship, and other areas. But through these investments, companies can create a dynamic data management construct that is constantly evolving, learning, and growing.

Data, then and now

Many companies took one of two basic approaches for dealing with data:
Laissez-faire: companies built one-off systems to address specific needs. Data warehouses, operational data stores, reports, and ad-hoc visualization ruled the day.
Brute force: companies created a citadel in which data was treated as scripture. To maintain data consistency and quality, they relied heavily on mandates, complex technologies, and manual procedures.
Fast-forward two decades. Both of these approaches have proven inadequate in the age of big data, real-time reporting, and automation, especially as data continues to grow in both volume and strategic importance.
What will advanced data management and architecture look like in my company?
We are talking about much more than how and where data is stored. We are also describing:
  • Sourcing and provisioning of authoritative data (for example, batch, real-time, structured, unstructured, and IoT-generated), plus reconciliation and synchronization of these sources
  • Metadata management and lineage
  • Master data management and unique identifiers
  • Information access and delivery (for example, analytics and upstream/downstream consuming applications)
  • Security, privacy, and encryption
  • Archiving and retention
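A toy sketch of the metadata management and lineage item from the list above: if each dataset records its upstream source and the transformation that produced it, any report can be traced back to its authoritative source. Dataset names and the transformations are illustrative assumptions.

```python
# Hypothetical lineage tracking: each Dataset remembers where it came from.
class Dataset:
    def __init__(self, name, source=None, transform=None):
        self.name = name
        self.source = source        # upstream Dataset, if any
        self.transform = transform  # description of the producing step

    def lineage(self):
        """Walk back to the authoritative source, newest first."""
        chain, node = [], self
        while node is not None:
            chain.append(node.name)
            node = node.source
        return chain

raw = Dataset("crm_raw")
clean = Dataset("crm_clean", source=raw, transform="dedupe + normalize")
report = Dataset("quarterly_report", source=clean, transform="aggregate")
print(report.lineage())  # -> ['quarterly_report', 'crm_clean', 'crm_raw']
```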
Though architectures vary by need and capability, most advanced data management architectures include the following components:
  • Ingestion and signal processing hub
  • Dynamic data fabric
  • Data integrity and compliance engine
  • Cognitive data steward
  • Enterprise intelligence layer
Who should “own” data in my organization? 
Currently, many organizations employ a data steward who focuses primarily on data quality and uniformity. 
With data increasingly a vital business asset, some organizations are moving beyond simple data management and hiring chief data officers (CDOs) to focus on illuminating and curating the insights the data can yield. 
How do global companies meet regulatory requirements that vary widely by nation? Data hosted on cloud services and other Internet-based platforms is subject to the jurisdiction of the countries where the data is hosted or stored.
On May 25, 2018, the European Union will begin enforcing the General Data Protection Regulation (GDPR).
Currently, global companies have several technology-based options to aid in meeting the letter of jurisdictional laws.
Finally, as any good CDO understands, draconian regulation of a particular jurisdiction may freeze data—with any luck, only temporarily. 

Skeptic’s corner

Misconception: We already tried master data solutions, and they did not work.
Reality: Historically, the process of reconciling the master and working data sets was manual and never-ending. A modern, cognitive approach requires less up-front rule-making and can teach itself to manage complexity and maintain regulatory compliance consistently across internal and external ecosystems.
Misconception: Even with automation, you still have frontline people inputting dirty data.
Reality: True, workers inputting and manipulating system data have historically introduced more complexity (and dirty data) than the systems ever did. That is why, when designing their data architectures, companies should consider moving data quality, metadata management, and lineage capabilities away from system centers and out to the edges, where they can correct a human error before it enters enterprise data flows.
Misconception: “Freeing” data will only lead to problems.
Reality: Suggesting that data should be freely accessible does not mean all data should be accessible to everyone across the enterprise at all times.
With metadata, dynamic ontologies and taxonomies, and other relational capabilities, the system can have sufficient context to map data content to enterprise functions and processes. Using this map, the system, not users, determines who gets access to which data sets, and why.
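The edge-validation idea described in this section can be sketched as follows: a record is normalized and checked at the point of entry, so a human typo is caught before it joins enterprise data flows. The field names and validation rules are illustrative assumptions, not a real system's schema.

```python
import re

def validate_at_edge(record):
    """Normalize a record and collect any errors before ingestion."""
    errors = []
    record = dict(record)  # do not mutate the caller's copy
    # Normalize, then check, each field at the point of entry.
    record["email"] = record.get("email", "").strip().lower()
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", record["email"]):
        errors.append("invalid email")
    record["country"] = record.get("country", "").strip().upper()
    if len(record["country"]) != 2:
        errors.append("country must be an ISO 2-letter code")
    return record, errors

rec, errs = validate_at_edge({"email": " Ana@Example.com ", "country": "us"})
print(rec["email"], rec["country"], errs)  # ana@example.com US []
```

Only records with an empty error list would be allowed into the downstream data flow; the rest go back to the person who typed them.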

Lessons from the front lines

My take

Risk implications

While organizations should continue to implement and maintain traditional security measures, which act as a deterrent to cyber threats, they should also consider the following steps:
  • Inventory, classify, and maintain sensitive data assets.
  • Implement data-layer preventative and detective capabilities.
  • Reduce the value of sensitive data.

Global impact

Where do you start?

The following steps can help you lay the groundwork for the journey ahead:
  • Pay data debt. CIOs think a lot about technical debt—the quick fixes, workarounds, and delayed upgrades that bedevil legacy systems and undermine efficiency. Many companies face comparable challenges with data debt. Consider the amount of money you are spending on one-off data repositories—or the cost, in terms of both time and efficiency, of creating reports manually. A first step in transforming your data management systems is assessing (broadly) just how much data sprawl you have. How many interfaces and feeds connect disparate repositories and systems? With an inventory of systems and data, you can try to quantify how much manual effort is expended daily/monthly/yearly to keep the sprawl intact and functioning. This information will help you better understand your current data capacity, efficiency (or lack thereof), and costs, and provide a baseline for further analysis.
  • Start upstream. Data scientists use technologies such as text and predictive analytics and machine learning to analyze largely unstructured data. This process typically begins at the end of the information supply chain—the point at which users tap into data that has been aggregated. By deploying these and other technologies at the beginning of the information supply chain—where an organization initially ingests raw data—companies can start the process of linking, merging and routing data, and cleansing bad data before data scientists and users begin working with it. This approach helps impose some structure by creating linkages within raw data early on, laying the groundwork for greater storage and management efficiencies. Also, when you can improve data quality at the point of entry by correlating it and performing relationship analysis to provide more context, data scientists will likely end up spending less time organizing data and more time performing advanced analysis.
  • Use metadata, and lots of it. Adding metadata to raw data at the point of ingestion can help enhance data context, particularly in unstructured data such as random documents, newsfeeds, and social media. Greater context, in turn, can help organizations group and process thematically similar information more efficiently, as well as enable increased process automation.
  • Create a cognitive data steward. Raw data is anything but uniform. Any raw data set is likely rife with misspellings, duplicate records, and inaccuracies. Typically, data stewards manually examine problematic data to resolve issues and answer questions that may arise during analysis. Increasingly, we see data stewards use advanced cognitive computing technologies to “assist” in this kind of review—there’s only so much a human can do to resolve these issues. The ability to automate this process can free up data stewards to focus on more valuable tasks.
  • Help users explore data more effectively. Navigating and exploring data can be challenging, even for experienced users. Providing a natural language interface and cognitive computing tools to help guide users as they undertake predictive modeling and advanced searches can turn laymen into data scientists—and help companies extract more value from their data management investments.
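The "start upstream" and metadata steps above can be sketched as a small ingestion pipeline: each raw item is cleansed, tagged with metadata (source, ingest time, an inferred topic) at the point of entry, and dirty items are routed aside before any data scientist touches the data. The sources, topics, and keyword rule are illustrative assumptions.

```python
from datetime import datetime, timezone

# Hypothetical topic rule used to add context at ingestion time.
TOPIC_KEYWORDS = {"invoice": "finance", "outage": "operations"}

def ingest(raw_items, source):
    """Cleanse, tag, and route raw text items at the start of the supply chain."""
    clean, quarantine = [], []
    for text in raw_items:
        text = " ".join(text.split())          # cleanse whitespace noise
        if not text:
            quarantine.append(text)            # route dirty data aside
            continue
        topic = next((t for k, t in TOPIC_KEYWORDS.items() if k in text.lower()),
                     "uncategorized")
        clean.append({
            "text": text,
            "source": source,                  # metadata added at entry
            "ingested_at": datetime.now(timezone.utc).isoformat(),
            "topic": topic,
        })
    return clean, quarantine

clean, bad = ingest(["  Invoice   #42 overdue ", "   "], source="email_feed")
print(clean[0]["topic"], len(bad))  # finance 1
```

Because the topic and source tags exist from the moment of ingestion, downstream users spend their time on analysis rather than on organizing the data.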


As data grows exponentially in both volume and strategic importance, enterprise data sovereignty offers companies a blueprint for transforming themselves into data-driven organizations. Achieving this goal may require long-term investments in data integration, cataloging, security, lineage, and other areas. But with focus and careful planning, such investments can generate ongoing ROI in the form of a dynamic data management construct that is constantly evolving, learning, and growing.


Nitin Mittal is a principal with Deloitte Consulting LLP and is based in Boston.
Sandeep Sharma is deputy CTO and a managing director in Deloitte Consulting LLP’s Analytics and Information Management practice, based in Hyderabad.
Ashish Verma is a managing director with Deloitte Consulting LLP, based in McLean, Va.
Dan Frank is a principal with Deloitte & Touche LLP and is based in Chicago.

Donation experiment

If you found this useful, you can donate to the author by bank transfer, leaving a message in the format:
Donation - Sender name - Người suy nghĩ - Message.
Trần Việt Anh
Account no. 0451000364912, Vietcombank, Thành Công branch, Hà Nội
Thanks for reading, and remember to donate!