Purpose: I designed and developed a low-cost, virtual home network security appliance that non-technical users can operate easily. It detects anomalies by learning network traffic patterns, protects the network by adjusting the firewall, and alerts the user to irregularities.
Problem Statement: A single botnet in 2017 compromised more than 500,000 computers. Attacks on home devices are rapidly increasing because new Internet of Things (IoT) devices such as the Amazon Alexa or the Google Home present easy targets. Trend Micro reported in 2017 that 1.8 million cyberattacks were conducted from home networks over a six-month period. On top of this, it is estimated that over 50 billion smart home IoT devices will be deployed by 2020. Many firewalls are available today for protection, but they are not easy to use and are not effective against current attacks because their response is fixed rather than adapting as attackers change methods. An appliance with AI software that learns and adjusts to new attack methods should be more effective against these changing methods.
Approach: I built an appliance using the TensorFlow AI library and the Snort API, and used Oracle VirtualBox to simulate a home network and IoT devices. The AI software is supplied with a training file containing datasets (presets) of known "Allowed" and "Not Allowed" combinations of source and destination IP addresses and destination ports. In training mode, it learns the general usage pattern of the end user and appends any false positives to the presets in the training file. Once trained, in operational mode, the AI software evaluates whether a network connection is safe, then either allows or denies the connection, and alerts the owner via email if a connection is denied. The home user can override the AI software's evaluation through a friendly interface; if overridden, the training file is further appended with this data.
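The training-file append and allow/deny override flow above can be sketched as follows. This is a minimal illustration, not the appliance's actual implementation: the record schema, `append_record`, `evaluate`, and the toy port-based rule are hypothetical stand-ins for the TensorFlow model and Snort integration the project actually uses.

```python
# Illustrative sketch of the training-file/override flow; the classifier
# here is a toy stand-in for the project's TensorFlow model.
import csv
import io

# Hypothetical training-file schema: source IP, destination IP,
# destination port, and an "Allowed" / "Not Allowed" label.
TRAINING_FIELDS = ["src_ip", "dst_ip", "dst_port", "label"]

def append_record(training_file, src_ip, dst_ip, dst_port, label):
    """Append one labelled connection to the training file, e.g. after a
    false positive seen in training mode or an owner override."""
    csv.writer(training_file).writerow([src_ip, dst_ip, dst_port, label])

def evaluate(classifier, conn, owner_override=None):
    """Operational mode: allow or deny a connection.

    classifier(conn) returns "Allowed" or "Not Allowed"; if the owner
    overrides the verdict, the override wins (and should be appended back
    to the training file for retraining)."""
    verdict = classifier(conn)
    if owner_override is not None:
        verdict = owner_override
    return verdict

# Toy rule standing in for the trained model: allow only HTTPS traffic.
toy_model = lambda conn: "Allowed" if conn[2] == 443 else "Not Allowed"

buf = io.StringIO()  # stands in for the on-disk training file
print(evaluate(toy_model, ("192.168.1.10", "93.184.216.34", 443)))  # Allowed
print(evaluate(toy_model, ("192.168.1.10", "203.0.113.5", 23)))     # Not Allowed
# The owner overrides the denial; record the correction for retraining.
append_record(buf, "192.168.1.10", "203.0.113.5", 23, "Allowed")
print(buf.getvalue().strip())  # 192.168.1.10,203.0.113.5,23,Allowed
```

In the real appliance the denial would also trigger the email alert, and the appended records would feed the next training pass.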
Results: A total of 4,000 tests were conducted in 5 rounds, each with a different number of training datasets: 500 (Round 1), 1,000 (Round 2), 2,500 (Round 3), 5,000 (Round 4), and 12,000 (Round 5). Each round comprised 800 tests: 100 tests for each of the 8 use cases. As the number of training datasets increased, accuracy generally increased as well: Round 1 accuracy was 50.17%; Round 2, 46.82%; Round 3, 60.17%; Round 4, 71.29%; and Round 5, 77.31%. With small numbers of training datasets, the results were inconsistent.
Conclusions: My prototype is user-friendly and adaptive, with an accuracy of 75%+. By deploying the program on single-board PCs like the BeagleBone, I can limit the cost to $50-$60. With ransomware payments averaging more than $500, if my prototype at 75% accuracy can prevent one ransomware infection per year for a home user, it will more than pay for itself. At a recent hackathon conducted by the MIT Media Lab, 25% of smart home IoT devices were hacked in less than 3 hours. Clearly, IoT devices are easily compromised and represent a higher risk, one that could extend to life and safety.
For future improvements, I would train the AI software to detect true malware based on behavior. I would also research other AI libraries and models to see whether there is a better fit for this use case.