10 Most Common Web Security Vulnerabilities

For all too many companies, it’s not until after a breach has occurred that web security becomes a priority. During my years working as an IT Security professional, I have seen time and time again how obscure the world of IT Security is to so many of my fellow programmers.

An effective approach to IT security must, by definition, be proactive and defensive. Toward that end, this post is aimed at sparking a security mindset, hopefully injecting the reader with a healthy dose of paranoia.

In particular, this guide focuses on 10 common and significant web security pitfalls to be aware of, including recommendations on how they can be avoided. The focus is on the Top 10 Web Vulnerabilities identified by the Open Web Application Security Project (OWASP), an international, non-profit organization whose goal is to improve software security across the globe.

A little web security primer before we start – authentication and authorization

When speaking with other programmers and IT professionals, I often encounter confusion regarding the distinction between authorization and authentication. And of course, the fact that the abbreviation auth is often used for both only aggravates this common confusion. The confusion is so common that perhaps the issue deserves to be included in this post as "Common Web Vulnerability Zero".

So before we proceed, let's clarify the distinction between these two terms:

Authentication: Verifying that a person is (or at least appears to be) a specific user, because they have correctly provided their security credentials (password, answers to security questions, fingerprint scan, etc.).

Authorization: Confirming that a particular user has access to a specific resource or is granted permission to perform a particular action.

Stated another way, authentication is knowing who an entity is, while authorization is knowing what a given entity can do.
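
To make the distinction concrete, here is a minimal sketch in Python. The in-memory users and permissions tables and the helper names authenticate and authorize are illustrative assumptions, not part of any particular framework: authentication asks who the caller is, while authorization asks what that caller may do.

    import hmac
    from hashlib import sha256

    # Illustrative in-memory stores. A real system would keep these in a
    # database and use a dedicated password-hashing scheme (bcrypt, argon2).
    users = {"alice": sha256(b"correct horse battery staple").hexdigest()}
    permissions = {"alice": {"read_report"}}

    def authenticate(username, password):
        # Authentication: is this caller really the user they claim to be?
        stored = users.get(username)
        if stored is None:
            return False
        supplied = sha256(password.encode()).hexdigest()
        return hmac.compare_digest(stored, supplied)  # constant-time compare

    def authorize(username, action):
        # Authorization: is this (already authenticated) user allowed to do this?
        return action in permissions.get(username, set())

    if authenticate("alice", "correct horse battery staple"):  # who are you?
        print(authorize("alice", "read_report"))   # True: what may you do?
        print(authorize("alice", "delete_users"))  # False

Note that the bare SHA-256 hash above is purely for brevity; a production system would use a proper password-hashing scheme and a real user store.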

Separation Anxiety: A Tutorial for Isolating Your System with Linux Namespaces

With the advent of tools like Docker, Linux Containers, and others, it has become super easy to isolate Linux processes into their own little system environments. This makes it possible to run a whole range of applications on a single real Linux machine and ensure no two of them can interfere with each other, without having to resort to using virtual machines. These tools have been a huge boon to PaaS providers. But what exactly happens under the hood?

These tools rely on a number of features and components of the Linux kernel. Some of these features were introduced fairly recently, while others still require you to patch the kernel itself. But one of the key components, Linux namespaces, has been a feature of Linux since version 2.6.24 was released in 2008.

Anyone familiar with chroot already has a basic idea of what Linux namespaces can do and how to use them. Just as chroot allows processes to see any arbitrary directory as the root of the system (independent of the rest of the processes), Linux namespaces allow other aspects of the operating system to be independently modified as well. This includes the process tree, networking interfaces, mount points, inter-process communication resources, and more.
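
For a small taste of this in practice, here is a minimal sketch in Python, assuming a Linux host and root privileges (the hostname value is just a placeholder). It uses libc's unshare(2) to move the current process into a new UTS namespace and then changes the hostname; the new name is visible only inside that namespace, while the rest of the system keeps its original hostname.

    import ctypes
    import os
    import socket

    CLONE_NEWUTS = 0x04000000  # flag value from <sched.h>

    # Call unshare(2) through libc to move this process into its own UTS
    # namespace (requires root or CAP_SYS_ADMIN).
    libc = ctypes.CDLL(None, use_errno=True)
    if libc.unshare(CLONE_NEWUTS) != 0:
        errno = ctypes.get_errno()
        raise OSError(errno, os.strerror(errno))

    # The hostname is now private to this namespace: the change below is
    # invisible to every other process on the machine.
    socket.sethostname("isolated-host")
    print("hostname inside the namespace:", socket.gethostname())

Once the process exits, the namespace disappears with it and the system hostname is untouched.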


A Data Engineer’s Guide To Non-Traditional Data Storages


With the rise of big data and data science, many engineering roles are being challenged and expanded. One new-age role is that of the data engineer.

Originally, data engineering centered on loading external data sources and designing databases: designing and developing pipelines to collect, manipulate, store, and analyze data.

It has since grown to support the volume and complexity of big data, and data engineering now encompasses a wide range of skills, from web crawling and data cleansing to distributed computing and data storage and retrieval.

For data engineers, data storage and retrieval is the critical component of the pipeline, along with an understanding of how the data can be used and analyzed.

In recent times, many new and different data storage technologies have emerged. But which of them is best suited to data engineering, and which has the most appropriate features?


An HDFS Tutorial for Data Analysts Stuck With Relational Databases

Introduction

By now, you have probably heard of the Hadoop Distributed File System (HDFS), especially if you are a data analyst or someone who is responsible for moving data from one system to another. However, what benefits does HDFS have over relational databases?

HDFS is a scalable, open source solution for storing and processing large volumes of data. HDFS has been proven to be reliable and efficient across many modern data centers.

HDFS utilizes commodity hardware along with open source software to reduce the overall cost per byte of storage.

With its built-in replication and resilience to disk failures, HDFS is an ideal system for storing and processing data for analytics. It does not require the underpinnings and overhead to support transaction atomicity, consistency, isolation, and durability (ACID) as is necessary with traditional relational database systems.

Moreover, when compared with enterprise and commercial databases, such as Oracle, utilizing Hadoop as the analytics platform avoids any extra licensing costs.

One of the questions many people ask when first learning about HDFS is: How do I get my existing data into HDFS?

In this article, we will examine how to import data from a PostgreSQL database into HDFS. We will use Apache Sqoop, currently the most efficient open source solution for transferring data between HDFS and relational database systems. Apache Sqoop is designed to bulk-load data from a relational database into HDFS (import) and to bulk-write data from HDFS to a relational database (export).
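
As a preview of what such an import can look like, here is a minimal sketch that drives the sqoop command-line tool from Python. The host, database, credentials, table name, and paths below are hypothetical placeholders; the flags themselves (--connect, --table, --target-dir, --num-mappers, --password-file) are standard Sqoop import options.

    import subprocess

    # Hypothetical connection details: substitute your own host, database,
    # credentials, table, and HDFS paths.
    subprocess.run(
        [
            "sqoop", "import",
            "--connect", "jdbc:postgresql://db.example.com:5432/sales",
            "--username", "etl_user",
            "--password-file", "/user/etl/.pg_password",  # kept in a file, not on the command line
            "--table", "orders",
            "--target-dir", "/data/sales/orders",  # destination directory in HDFS
            "--num-mappers", "4",  # parallel map tasks for the transfer
        ],
        check=True,
    )

Sqoop splits the table across the requested map tasks (four here) and writes the rows as files under the given HDFS directory.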
