Keil Hubert: Ethically Questionable Subterfuge

How far are you willing to go in order to protect your company? Business Technology’s resident U.S. blogger Keil Hubert recommends that every company discuss this problem and decide where it stands.

On Friday, 17th April, U.S. District Judge Andrew P. Gordon of Nevada ruled that an FBI ruse used to perform covert evidence collection on some suspected wrongdoers was ‘unconstitutional’. I’m not a law scholar, but I suspect that this is a bigger deal than most people appreciate. The short version of the case worked out like this: a bunch of suspected baddies were staying in a chic villa at Caesar’s Palace in Las Vegas. In order to obtain the evidence that they needed to build a case for a search warrant, the FBI cut off Internet service to the suite, and then posed as repairmen in order to gain illicit access to the premises. [1] The agents looked around under the pretence of providing tech support, and left after surreptitiously collecting the video evidence that they needed.

The defence argued that the FBI’s antics were unlawful. Back on 12th November, Ars Technica’s David Kravets wrote about the case and included this (rather ominous) quote:

‘The government now admits that agents entered private rooms using the ruse of shutting off Internet access and dressing up as technicians,’ defense attorney Thomas Goldstein said in an e-mail. ‘Based on the government’s response, agents can use similar schemes to enter any home in America without a warrant and without the slightest suspicion. We hope the Court will recognize this egregious violation of personal privacy.’

Last week, the judge agreed with the defence, and wrote in his decision:

‘The Fourth Amendment [2] protects “the right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures.” … This case tests the boundaries of how far the government can go when creating a subterfuge to access a suspect’s premises. Here, the government disrupted the internet [sic] service to the defendant’s hotel room in order to generate a repair call. Government agents then posed as repairmen to gain access to the defendant’s room and conduct a surreptitious search for evidence of an illegal sports betting operation. By creating the need for a third party to enter defendant’s premises and then posing as repairmen to gain entry, the government violated the defendant’s Fourth Amendment rights.’

I agree with Judge Gordon’s analysis, and with Mr Goldstein’s admonition that the government’s subterfuge amounted to ‘misconduct’. Again, I’m not a lawyer, but I grok the gist of both sides’ arguments. We already live in an environment of near-total electronic surveillance. In both the USA and in the UK, our civil liberties have been severely eroded since 9/11 under the laughable pretence that government overreach is somehow ‘needed’ to ‘prevent terrorism’. It was only natural that someone in the US law enforcement community would try (again) to push the boundaries of the 4th Amendment to see what their agency could get away with. This time, they tried and failed. There will definitely be a next time. There’s always a next time. Those in power always want more power.

‘You’ve been “randomly” selected for additional security screening, sir. Please remove your trousers while I position the lights and cameras.’

That being said, there’s definitely a positive lesson to be learned from the FBI’s casino ruse for those of us who work in IT: you’ll get better threat awareness from delivering tech support at the point of use than you will from just implementing policies and installing software remotely. Eyes-on matters.

Lest you think I’m endorsing the FBI’s duplicity, let me set some context for this point: first, the vast majority of us who work in corporate IT are not considered part of a law enforcement function. Even when we conduct internal investigations of suspected misconduct or engage in incident response, we’re not police. As corporate IT, our remit is to protect the company’s critical information, to protect its ability to operate, and to minimize disruptions. Over 99 per cent of the time, we’re not in the business of collecting or preserving evidence for submission to a court of law. [3]

Furthermore, the rules for IT employee conduct inside a commercial business are quite different from the rules that LE personnel must follow in the pursuit of criminal misconduct. The owner of a commercial space has dominion over the company’s facilities. Management both sets and enforces its rules, which can be more restrictive than those the law sets for public spaces. An employee only has as many ‘rights’ as the company’s employment contract, union agreements, and labour law codify. Sometimes that means a significant curtailment of privacy rights. Often, the ‘rights’ that we have at work are markedly different from the legal and civil rights that we enjoy outside of the office. It’s very important that everyone – worker and business owner alike – understands those differences.

This is a critical distinction that many employees can’t wrap their heads around. I’ve had terrible rows with executives over whether or not employees had a ‘Reasonable Expectation of Privacy’ in their offices or over their file shares (no, and no, respectively, per our company’s lawyer-blessed policies). Many users draw mistaken conclusions about their ‘rights’ from police procedural shows that don’t reflect the reality of their working environment. One of the simplest expressions of this difference is the employer’s Acceptable Use Policy for company information systems. Many companies provide computers and Internet access to employees for the conduct of official company business only; if an employee chooses to use their company PC to go browse naughty videos (or even G-rated kitteh videos) on company time, then they can lose their job for misconduct. [4]

As an aside, there’s a significant legal and ethical fight going on right now in American society about just how much an employer can regulate employee conduct outside of work. We’re still working out whether a bad joke or a risqué comment made on the Internet after work hours should cost a worker his or her job. I don’t expect that argument to get resolved for several more years, but the debate is fascinating. It also reflects the very loose grasp that many people seem to have about the overlap (or gap) between internal company controls and legal controls.

’They know! Somehow, they found out that I didn’t staple the cover sheet to my TPS report…’

When it comes to activities conducted at work, however, the rules are often a bit less vague. When you’re on the clock, you’re supposed to be working as directed by your superiors in the manner dictated by the entity that pays you for your time. That’s a perfectly reasonable expectation; if I’m paying you money to generate PowerPoint slides, then I expect you to generate slides. Likewise, if I (as your employer) give you specific instructions on what you can and cannot do with your company equipment while you’re at work, then it’s reasonable and appropriate to expect you to obey those rules. You don’t have to like them, but you do have to abide by them.

That’s why the FBI’s ‘tech support’ gambit actually works for corporate security folks. Most companies have vetted and published policies that give the IT department greater rights than the end users have over the use and control of IT kit since the IT department is responsible for configuring, patching, managing, monitoring, and otherwise keeping end-user equipment safe. That’s why the network administrator is allowed to deploy patches to your PC and force you to reboot no matter how inconvenient the interruption may be for you: the integrity of the business network is considered more important than your individual contributions to the company’s bottom line.

That, in turn, makes it perfectly reasonable for representatives of the IT department to visit any corner of the company at any time. Operating systems fail. Zero-day exploits can (and will) arise at any time. There’s always something going wrong with company kit, which means that techs have to be everywhere, all the time. They have an unenviable and unending job to do. That’s why they’re nearly invisible, and why they’re welcome (or, at least, tolerated) everywhere in the company.

That ability to go everywhere without arousing suspicion means that tech support folks are ideally positioned to serve as covert observers. A tech performing a job in the field spends a great deal of time waiting (for patches to load, reboots to finish, etc.) which gives her a perfect excuse to glance around the work area and to listen in on conversations. She’s bored, after all. Just waiting patiently for her job to finish, same as you. She can’t help but notice what’s happening around her.

Good IT security leads take deliberate advantage of this ability to keep tabs on the higher-risk users. By regularly visiting the offices of those users who have a reputation for pushing boundaries and for ignoring company policies, a good security team can detect early manifestations of inappropriate behaviour – preferably before it constitutes a termination-level offense. Often, a calm word in the office cowboy’s ear can head off a disaster before his or her antics become unforgivable.

‘I warned you about using Comic Sans in your bloody PowerPoint decks, Bob. Time to die!’

Therefore, it’s only logical that the FBI’s demonstrated gambit of deliberately engineering a systems failure in an employee’s machine, in order to create a plausible excuse for a tech support employee to gain access to the workspace, is potentially valuable. If the dirty trick gives security a plausible excuse to get their hands on a potential wrongdoer’s equipment in order to verify or refute suspected misconduct, then it might be a cost-effective way to gather intelligence without tipping off the suspect that security is on to them. The key word here is ‘might’… it might be worthwhile. It depends a great deal on just how much risk is involved in allowing the suspect to carry out their nefarious plans – on what-all is at stake.

Whether such a gambit is ethical is another matter entirely. Just because it’s legal doesn’t necessarily mean that the other employees will regard it as an acceptable tactic after the ruse is uncovered – probably as a side-effect of a termination hearing or an arrest. Once users become aware that IT is deliberately deceiving them in order to gain access to their offices and systems for covert surveillance, they may well decide that management has committed an unforgivable breach of trust against all employees. Here, the key word is ‘may’; employees may see such a ruse as an affront, or they may see it as a legitimate (if distasteful) technique that management is entitled to use in order to pre-empt some unacceptable risk.

The tipping point between what constitutes ‘acceptable’ and ‘unacceptable’ subterfuge is probably the point at which employee behaviour can cause loss of life, either within the work environment or for the general public. Businesses where misconduct can result in death are the ones where draconian security measures start to seem reasonable – hospitals, airlines, refineries, weapons manufacturers, and the like all seem like they’d warrant a robust and aggressive internal affairs program.

The law enforcement community itself probably deserves the greatest level of aggressive internal scrutiny, especially given the propensity for policemen to illegally use lethal force on citizens. For people to have faith and confidence in their LE presence, they need assurances that control measures are both in place and are effective in detecting and in excising unprofessional members of the agency.

What about your company, though? Is what you do at your place of employment so dangerous that it’s ethically and morally worth the risk of employee alienation in order to prevent an appalling tragedy? This isn’t just an idle break room question; it’s a serious discussion that needs to happen between the board of directors, executive management, and the security team. Every business needs to engage in some frank analysis of their risk profile and make a rational, informed decision about just how far they’re willing to go in order to balance the conflicting needs of employee privacy and legal responsibility. There aren’t any easy answers. On the one hand, company lawyers will always recommend caution and conservative restraint in order to avoid unnecessary exposure. On the other, the public (and often the courts) will often condemn a company for not having taken ‘adequate’ steps beforehand to prevent an unforeseeable tragedy after said tragedy has happened.

‘Why didn’t you take adequate steps to prevent my client from obscuring the “don’t drink and drive” label on her vodka bottle with your dangerous and irresponsible opaque sack?’

That being said, the conversation must take place. Upper management has to understand what all’s at stake and, once informed of the risks, take a definite stand on just how ‘dirty’ they’re willing to play when it comes to conducting internal security operations. Once they make that decision, it’s the security team’s obligation to operate faithfully under those expectations thereafter. Fair warning: this can be a horrifying subject for people who aren’t accustomed to addressing such topics.

To be absolutely clear, I am advocating for management to make that determination, not the security team itself. This is part of the awful burden of leadership: taking responsibility for activities that are necessary but repugnant. The responsibility can’t be delegated or ignored; these decisions have to be made and regularly revisited as culture, law, and circumstances change. As the head of either IT operations or IT security, you have a duty to force the discussion to take place within the highest halls of power. Know where your company stands, and then initiate (or terminate) security activities according to your leaders’ acceptance of risk.

So, then… Is the engineering of a fake tech support incident an effective way to infiltrate a suspected wrongdoer’s domain? Yes, it is. Can tech support personnel be effective at conducting covert internal surveillance operations? Yes, they can. Is it legal to trick your users into thinking that they have an IT problem when they really don’t? Maybe. Should your company incorporate this tactic into your arsenal of misconduct countermeasures? Maybe. That’s an essential part of a larger conversation that you need to have with top-level management and legal. If you haven’t already had that discussion and reached a position, then now’s the time: take Judge Gordon’s decision into your boss’s office and initiate the conversation.

Our governments have made the decision for us that it’s acceptable to capture the entirety of every citizen’s Internet communications and telephone calls without a warrant in order to prevent potential acts of future terrorism. If you view that as an acceptable sacrifice of civil liberties, then you’re probably ready to follow the governments’ lead within your company. If, however, you view the governments’ post-9/11 security practices as unacceptable for your company, then it’s probably time to initiate another difficult conversation about who we are, who we intend to be, and what exactly we’re going to do about the mess we’ve gotten ourselves into.


[1] I’m sticking with ‘illicit’ in this sentence, because of Judge Gordon’s decision. At the time, according to the US Department of Justice’s argument, the FBI agents believed that they had the right to conduct this sort of deception.

[2] Fourth Amendment to the U.S. Constitution, that is.

[3] There are those rare exceptions, and that’s why a good cyber security team needs the tools, training, and authority to conduct forensic capture of evidence to support LE activities.

[4] A responsible company makes sure that all employees know the rules for acceptable conduct so that there’s never any question of what is and isn’t allowed.


POC is Keil Hubert, keil.hubert@gmail.com
Follow him on Twitter at @keilhubert.
You can buy his books on IT leadership and IT interviewing at the Amazon Kindle Store.

Keil Hubert is a retired U.S. Air Force ‘Cyberspace Operations’ officer, with over ten years of military command experience. He currently consults on business, security and technology issues in Texas. He’s built dot-com start-ups for KPMG Consulting, created an in-house consulting practice for Yahoo!, and helped to launch four small businesses (including his own).

Keil’s experience creating and leading IT teams in the defense, healthcare, media, government and non-profit sectors has afforded him an eclectic perspective on the integration of business needs, technical services and creative employee development… This serves him well as Business Technology’s resident U.S. blogger.

Keil Hubert is the head of Security Training and Awareness for OCC, the world’s largest equity derivatives clearing organization, headquartered in Chicago, Illinois. Prior to joining OCC, Keil has been a U.S. Army medical IT officer, a U.S.A.F. Cyberspace Operations officer, a small businessman, an author, and several different variations of commercial sector IT consultant. Keil deconstructed a cybersecurity breach in his presentation at TEISS 2014, and has served as Business Reporter’s resident U.S. blogger since 2012. His books on applied leadership, business culture, and talent management are available on Amazon.com. Keil is based out of Dallas, Texas.

© Business Reporter 2021
