While end-to-end encryption dominates the headlines in light of recent legislative efforts, the encryption of our data in transit is also now more relevant than ever. With a significant chunk of the American population continuing to work from home, VPN traffic from private networks has soared by a whopping 34%, according to Verizon. Many companies have scrambled to pick the VPN software that will help them stay afloat. But for many Linux users, WireGuard had already become the tool of choice after an exciting announcement made late last year: its official integration into the Linux kernel.
While there are several VPN implementations to choose from, few come close to the simplicity of WireGuard’s open-source tunneling protocol. Compared to other protocols such as IPsec and OpenVPN, WireGuard is known for being lightweight, easy to set up, and (most importantly) highly secure. While it is also available for other operating systems, including Windows, macOS, and FreeBSD, it now has a unique home inside the Linux kernel itself. As of kernel version 5.6, users no longer need to manually download and load the VPN as an out-of-tree kernel module (add-on).
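That simplicity is easiest to see in the configuration itself. Below is a rough sketch of what a WireGuard client config can look like; every key, address, and endpoint here is a placeholder, not a real value:

```ini
# /etc/wireguard/wg0.conf -- illustrative client configuration.
# All keys, addresses, and the endpoint are placeholders.

[Interface]
# Private key generated with `wg genkey`
PrivateKey = <client-private-key>
# This peer's address inside the tunnel
Address = 10.0.0.2/24

[Peer]
PublicKey = <server-public-key>
# The server's public address and listening port
Endpoint = vpn.example.com:51820
# Route all traffic through the tunnel
AllowedIPs = 0.0.0.0/0
```

With a file like this in place, bringing the tunnel up is a single `wg-quick up wg0` command, a far cry from the certificate and cipher-suite juggling that IPsec or OpenVPN setups often require.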
News of WireGuard’s merge could not have come at a more appropriate time. As millions around the globe rely on remote connections to access corporate resources from home, WireGuard is poised to become the de facto standard for point-to-point encryption on Linux. Not only does this decision advance the interests of privacy and confidentiality among users, but it has also received overwhelming support from the Linux community, including from none other than the creator of Linux himself, Linus Torvalds, who referred to WireGuard as “a work of art.”
PCI compliance is a standard that provides a structured and secure framework for handling customer data such as credit card numbers and other payment information. For businesses to collect data online or process credit card transactions, they must be PCI compliant. However, an alarming trend has started to emerge: according to a study done by Verizon, PCI compliance among businesses has dropped for the second year in a row. In fact, just a third of all global firms are PCI compliant, which means there is a strong possibility that the third-party businesses you are dealing with are not actually certified to handle your data or credit card information. This is a sharp decrease from the previous year, when 53% of these international businesses were PCI certified.
The lack of certified businesses is alarming for several reasons. For one, there is a greater chance that such a business is handling your data improperly, putting your information at risk. Another is that if two-thirds of these businesses are letting their certification lapse, it does little to encourage the remaining third, who spend real money to stay certified, and could lead even more businesses to decide against PCI certification. However, just because a business is not PCI certified does not necessarily mean it is solely at fault. Getting PCI certified is incredibly hard to do in the first place, and most businesses struggle to remain certified after obtaining the certification.
In order to get PCI certified, a business must pass 12 broad requirements, 78 base requirements, and then over 400 test procedures. With all these hoops to jump through, it’s no wonder many businesses have recently been letting their PCI compliance lapse. In some cases, the cost and time of getting PCI certified can exceed the cost of simply dealing with the fallout from a cyber-security breach. The punishments for not being PCI compliant are not that steep, either. Depending on the volume of transactions, the number of clients, and the PCI level the company falls under, fines can range from $5,000 to $100,000, amounts that big companies can absorb without any problem. Overall, the PCI system is a good attempt at applying a standard for handling customers’ data and credit card information online, but there are too many hoops for businesses to jump through, which often discourages its adoption in the corporate world.
Adding to the growing list of standards with which companies must comply to do business, the California Consumer Privacy Act (CCPA) will go into effect on January 1st, 2020 (that’s less than a month away) and will likely begin enforcement by July 1st, 2020 at the latest. The focus of the bill is on consumers’ right to control how their information is used and whether or not it is stored, very similar to the goals of Europe’s GDPR.
Who has to comply?
This piece of legislation will apply to any company that does business in California, collects consumers’ personal information, and meets at least one of the following criteria: the company has an annual gross revenue of $25 million or more, the company makes at least half of its revenue from selling consumers’ personal information, or the company handles the personal information of at least 50,000 consumers. In other words, it applies to pretty much every moderately successful site in existence. There are exceptions to this, mostly for types of data already covered by existing standards or legislation, such as HIPAA data.
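The applicability test is simple enough to express as code. Here is a purely illustrative sketch (the function and parameter names are hypothetical, and this is obviously not legal advice):

```python
def ccpa_applies(annual_revenue_usd: float,
                 revenue_share_from_selling_data: float,
                 num_consumers_data_handled: int) -> bool:
    """A business operating in California that collects personal
    information is covered if it meets ANY one of the three thresholds."""
    return (
        annual_revenue_usd >= 25_000_000           # $25M+ gross revenue
        or revenue_share_from_selling_data >= 0.5  # half+ of revenue from data sales
        or num_consumers_data_handled >= 50_000    # 50,000+ consumers' data
    )

# A mid-sized site: $5M in revenue, sells no data, but holds 80,000 users' data
print(ccpa_applies(5_000_000, 0.0, 80_000))  # True: the third threshold alone triggers coverage
```

Note the `or`: a company cannot escape coverage by being small in revenue if it still handles enough consumers’ data, which is why so many moderately successful sites are swept in.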
Some view this as an overreach by the California government and harmful to interstate commerce. Whether or not you agree with that sentiment, you can at least agree that this is a bold move by California, one certainly intended to push this kind of policy into other parts of the United States, potentially at the federal level. In fact, several states have already moved to create similar legislation of their own, some more aggressive than the CCPA.
Companies found in violation are subject to fines, and consumers may have the right to pursue civil suits against companies if they are harmed by misuse of their data.
Consumer Rights Under CCPA
Under the CCPA, consumers have the right to receive notice of what information companies will be collecting on them. This means we are all probably going to get bombarded with emails again, as was the case with GDPR. The CCPA also requires companies to include this information in their privacy policies, along with the user’s rights under the CCPA and how users can exercise those rights.
Consumers have the right to opt out of programs where their data may be sold to third-party entities at any time, and those under the age of thirteen cannot have their information sold unless they opt in with parental consent. This is a big deal, and it will probably have drastic effects on the business models of companies targeted by the clause, including those that make at least 50% of their gross annual income from the sale of personal information. The bill does allow for different pricing when a user exercises this right.
Importantly, companies cannot discriminate against users who exercise any of these rights, with the exception of the financial incentive for opting into the data-sharing programs. This means that a site cannot deny access to any service based on whether a user has deleted their data, requested a copy of it, or opted out of data sharing.
It is great to see that the United States is beginning to catch up to Europe in the field of protecting its citizens’ privacy from abuse by the private sector. I think that this step is necessary if privacy is to exist in any form moving forward as companies wish to collect more and more data so that they can better target our preferences. It will not solve the privacy problem our society is facing by itself, but it is an important step towards taking back some control of our individual privacy as consumers.
It is important to note, however, that this is yet another set of regulations and standards that companies must follow on top of the myriad other standards they must meet, such as PCI, HIPAA, et al. If each state puts forth its own version of this law, compliance only gets more complicated and burdensome for companies. While standardizing practices is a good thing for protecting consumers, it is important that we not overwhelm companies with heaps of regulations and standards, as putting up too daunting a gate often inspires those confronted with it to seek a way around it. Surveys have shown that compliance with some standards has been in slight decline due to the overwhelming number that must be followed. We must take great care to make fair policy whose adherence is not too great a burden.
Written by Daniel Szafran
RIT is rolling out multi-factor authentication (MFA) very soon. MFA adds an extra factor to validate your credentials. For example, when you log into RIT services you are currently prompted for your username and password; with MFA, you will also need to provide an extra form of authentication. These methods include the Duo mobile app, text message, phone call, office phone call, and email. RIT has been experiencing more attacks than ever before, and this is its attempt at mitigating that risk. Last year, MFA was put into effect for faculty, staff, and student employees after many Ebiz accounts were compromised. The attackers changed direct deposit numbers so that payments would be routed elsewhere. Luckily, no one lost money, because controllers saw the changed numbers and knew what was happening; another university had been attacked in the same manner.
Why does this matter to us?
If you do not enroll in MFA by the 24th of October, there will be a hold on your account and you will not be able to enroll in classes next semester.
MFA means using another device to authenticate yourself on RIT services. For example, if you signed up planning to use the Duo app, DO NOT forget your phone. ITS will have to give you a bypass until you can get access to your phone again, which would be unfortunate if you need to log onto something ASAP. I personally don’t see why students need MFA, but I have no choice but to enroll.
This article was about malware targeting Macs that can be hidden in the Mac App Store. The writer notes that although researchers found the vulnerability, no one appears to have exploited it yet, from what they can see.
The attack works by bypassing the code signing done before submission to the App Store. Code signing is essentially a virtual security check to make sure an app is safe and stable. It was noticed that the code only gets checked once, and the signature never gets checked again. This means an attacker can make a clean app, submit it to the App Store, and then, once users have downloaded it, release an update infected with malware. Attackers can also steal or buy real code-signing certificates, sign a malicious app with them, and potentially get it published to the App Store for everyone to download.
The writer of the main article says, “As a result of this research, Reed himself added code signature verification to Malwarebytes Mac products so they now perform a check every time they launch.” Reed works at Malwarebytes, and he put out an update so their software re-checks the code signature of app updates. He even says, “A script kiddie could pull off something like this,” which shows that something should be done to fix this problem before others catch on and start infecting people’s computers with malware. This was disclosed recently, so hopefully it gets fixed soon. I remember when I made my own App Store app, and I do not recall any checks being done on my updates after the initial release.
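Conceptually, the fix described above amounts to re-verifying the app against a known-good signature on every launch instead of trusting a one-time check at install. Malwarebytes’ actual implementation uses macOS code-signature verification; the sketch below is only a simplified stand-in that uses a file digest, with made-up file contents standing in for an app binary:

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_on_launch(path: str, trusted_digest: str) -> bool:
    """Re-check the binary against the digest recorded when it was vetted."""
    return sha256_of(path) == trusted_digest

# Simulate: a 'clean' app passes review, then a tampered update lands.
with tempfile.TemporaryDirectory() as d:
    app = os.path.join(d, "app.bin")
    with open(app, "wb") as f:
        f.write(b"clean application code")
    trusted = sha256_of(app)                # digest taken at review time

    print(verify_on_launch(app, trusted))   # True: binary is unmodified

    with open(app, "wb") as f:              # attacker ships a malicious update
        f.write(b"malicious payload")
    print(verify_on_launch(app, trusted))   # False: the per-launch check now fails
```

The key point the sketch illustrates is that the one-time check described in the article would stop after the first `True`; checking at every launch is what catches the tampered update.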