Minimizing insider threats with open-source intelligence
Insider threats rank among the most challenging risks governments face. Trusted insiders under financial pressure, nursing a grievance or motivated by other factors can intentionally damage an agency. The results range from information leakage and national security breaches to workplace violence and reputational damage, and insiders’ unintentional actions can be equally harmful. A robust insider threat program that protects government resources, employees and contractors can therefore deliver significant value and reduce these risks.
To successfully mitigate insider threats, the officials responsible for agencies’ insider threat programs need access to more accurate and trustworthy signals for identifying and resolving risks.
Because a complete picture may not always be available, organizations are increasingly turning to publicly available information (PAI) — one of the most complex and information-rich sources of data. While PAI represents a significant opportunity to deliver exactly the information investigators need, its depth and breadth are growing exponentially.
The massive amounts of data that must be analyzed with zero margin for error can overwhelm officials and subject employees under consideration to unwarranted scrutiny. In addition, organizations implementing a solution to analyze PAI at scale face other challenges, such as false positives, errant signals and the time and expense of corroborating information.
Insider threat programs have proliferated across agencies and matured since 2011 when Executive Order 13587 called for their implementation. Since then, the availability of PAI has grown in volume, velocity and number of sources. A key driver behind this massive growth is the plethora of digital social assets and the ability for insiders to anonymize themselves online.
Open-source intelligence, whether gleaned from the surface web or dark web sites, can augment these government initiatives to collect and analyze information posted on social platforms and other media.
Adoption of open-source information at scale to address insider threat risks has been slow for a variety of reasons.
Firstly, the attribution and accuracy of information posted on social channels, as well as on the deep and dark web, are key concerns. Fraudsters or nation-state and non-state actors with ulterior motives can take over legitimate social accounts. For example, a foreign intelligence organization could seize control of a government or contractor employee’s social media account and post damaging content, including both disinformation and misinformation.
Secondly, employees’ privacy and freedom of speech are critical issues for agencies considering the systematic usage of open-source information.
With attribution a concern, the use of information from social channels must be consistent with privacy guidelines and include opportunities for the individuals under review to address any findings.
Finally, agencies must consider the effort and risk of scaling the use of PAI from the social channels of a single person to those of potentially millions of government and contractor personnel.
Fortunately, technological advancements such as artificial intelligence and identity resolution, combined with operational best practices and training, have made it easier to use publicly available data to assess the authenticity and risk of a post.
AI can combine information from many different sources, analyzing data at volume and generating insights derived from those sources. Identity resolution is the process of attributing a person’s behavior and interactions — across multiple platforms or channels — to a single unified profile.
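As a minimal sketch of the identity-resolution idea described here — linking records that share an identifier into one unified profile — the example below uses a union-find merge over hypothetical records. The field names, platforms and data are illustrative assumptions, not any specific vendor’s schema or method:

```python
from collections import defaultdict

# Hypothetical records gathered from different public platforms.
# Field names and values are illustrative only.
records = [
    {"platform": "forum_a", "handle": "jdoe42", "email": "jdoe@example.com"},
    {"platform": "site_b", "handle": "john_d", "email": "jdoe@example.com"},
    {"platform": "forum_c", "handle": "jdoe42", "email": None},
    {"platform": "site_d", "handle": "unrelated", "email": "other@example.com"},
]

def resolve_identities(records):
    """Group records that share any identifier (handle or email)
    into a single unified profile using union-find."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Index record positions by each identifier value they carry.
    by_key = defaultdict(list)
    for i, rec in enumerate(records):
        for fld in ("handle", "email"):
            if rec[fld]:
                by_key[(fld, rec[fld])].append(i)

    # Records sharing an identifier belong to the same profile.
    for idxs in by_key.values():
        for j in idxs[1:]:
            union(idxs[0], j)

    profiles = defaultdict(list)
    for i in range(len(records)):
        profiles[find(i)].append(records[i])
    return list(profiles.values())

profiles = resolve_identities(records)
# The first three records chain together via a shared email and a
# shared handle; the fourth record remains a separate profile.
```

Real identity-resolution systems add probabilistic matching and confidence scoring on top of exact-match linking like this, which is where the AI component comes in.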
Combined, AI and identity resolution can derive more accurate and meaningful insights from PAI. Potential benefits to agencies include fewer false positives and the ability to identify undisclosed information that could pose a risk to government personnel, facilities or networks.
In addition, subject matter and operational expertise for using the deep and dark web is an integral component of any potential application dealing with open-source information. For example, any signals generated from a deep or dark web inquiry will require manual reviews and an understanding of how information is posted in these locations before they can be accurately attributed to a person.
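The manual-review gate described above can be sketched as a simple triage rule: signals from the deep or dark web, or with low attribution confidence, are routed to a human analyst rather than auto-attributed. The threshold, field names and sample data below are illustrative assumptions, not an established standard:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str               # e.g. "dark_web_forum" (hypothetical label)
    content: str
    attribution_score: float  # 0.0-1.0, from an upstream model (assumed)

REVIEW_THRESHOLD = 0.8  # illustrative cutoff, not a prescribed value

def triage(signals):
    """Route signals: only high-confidence, surface-web attributions are
    auto-accepted; everything else goes to a human review queue."""
    auto_attributed, manual_review = [], []
    for s in signals:
        if s.source.startswith("dark_web") or s.attribution_score < REVIEW_THRESHOLD:
            manual_review.append(s)   # dark-web signals always need review
        else:
            auto_attributed.append(s)
    return auto_attributed, manual_review

signals = [
    Signal("social_media", "public post", 0.95),
    Signal("dark_web_forum", "credential mention", 0.90),
    Signal("social_media", "ambiguous handle match", 0.40),
]
auto, manual = triage(signals)
# One signal is auto-attributed; two are queued for an analyst.
```

The design choice here is deliberate: dark-web signals bypass the score check entirely, reflecting the point above that such signals require human judgment regardless of model confidence.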
Making open source work
Using open-source information at scale can be a game-changer. Success, however, will require that government and industry work together strategically to update policies for collecting and using open-source data. It is also incumbent upon industry to engage government officials and provide updates on the latest technologies and best practices. At the tactical level, government and the vendor community can work together to identify and adopt the tools and processes for incorporating PAI into a systematic insider threat risk management program.
To do this, government agencies need an approach, ideally fortified via AI and identity resolution, for gathering large data pools and specific insights. A set of thoughtfully constructed review guidelines, supported by human oversight, will help agencies rapidly verify the data. They also need the ability to swiftly identify attribution and create intelligence-rich reports for use in cases requiring prosecution or disciplinary action.
Ultimately, the goal is to confidently match a digital communication to its owner and assess the risk it represents. Success will enrich the vetting process, reduce risk — whether intentional or from poor cyber hygiene — deter the insider threat and reduce the overall risk to the government.