
Identity Intelligence: an overview

The need for Identity Intelligence tools and models comes from the awareness, developed in recent years, that an Identity Management system used solely to automate user account management exploits only part of its potential. Identity Management solutions are increasingly seen as tools for security governance: tools used to increase security and to meet the compliance requirements that organizations face in order to satisfy regulatory constraints, obtain certifications, and pass internal and external audits.

The premise behind the adoption of an Identity Intelligence solution is that “you cannot manage what you cannot measure”: in order to properly manage user accounts and identities, you must first be able to know them in detail.

Identity management (IdM) is the task of controlling information about users on computers. Such information includes data that authenticates the identity of a user, and data that describes the resources and actions the user is authorized to access and perform. It also includes the management of descriptive information about the user, and of how and by whom that information can be accessed and modified. Managed entities typically include users, hardware and network resources, and even applications.

In the real-world context of engineering online systems, identity intelligence can involve three basic functions:

  1. The pure identity function: Creation, management and deletion of identities without regard to access or entitlements;
  2. The user access (log-on) function: For example: a smart card and its associated data used by a customer to log on to a service or services (a traditional view);
  3. The service function: A system that delivers personalized, role-based, online, on-demand, multimedia (content), presence-based services to users and their devices.

The term “Identity Intelligence” spread during the course of 2010, thanks in part to its adoption by Gartner, and refers mainly to the following set of capabilities:

  • the presence, within an organization, of a full repository of user accounts, able to effectively collect all the information characterizing users and their access rights. The difference is substantial compared with the “standard” repositories used by Identity Management solutions, which are typically simpler and less suited to complex analysis.
  • the ability to relate information from different target and authoritative sources, in order to correctly and efficiently populate the repository. In complex environments, data about users and user accounts are collected from dozens or hundreds of different sources, using different standards, different structures and different technologies. In order to allow quick, detailed and complete analysis, it is essential to have a tool that can collect, relate and homogenize all this data (a toy sketch of this correlation step appears at the end of this section).

The third capability is the ability to build complex analyses, based on the principles of business intelligence, that provide valuable information on:

  • the state of the users within the organization;
  • the quality of the user management processes;
  • an overview of user identities and their access rights within the enterprise;
  • the relationships between identity information and other entities within the organization, such as assets and resources.

At the same time, monitoring and reporting systems that operate on a complete repository offer advanced security and control features.
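As a concrete illustration of the correlation capability described above, here is a minimal Python sketch that merges user records from two hypothetical sources (an HR feed and an Active Directory export) into a single repository keyed on a shared employee ID. All field names and values are illustrative assumptions, not any product’s actual data model.

    # Two hypothetical source feeds, keyed on a shared employee ID.
    hr_records = [
        {"employee_id": "E100", "name": "Ada Lovelace", "department": "R&D"},
        {"employee_id": "E101", "name": "Alan Turing", "department": "IT"},
    ]
    ad_accounts = [
        {"employee_id": "E100", "sAMAccountName": "alovelace", "groups": ["rd-users"]},
        {"employee_id": "E101", "sAMAccountName": "aturing", "groups": ["it-admins"]},
    ]

    def build_repository(*sources):
        """Merge records from every source into one profile per employee ID."""
        repository = {}
        for source in sources:
            for record in source:
                profile = repository.setdefault(record["employee_id"], {})
                profile.update(record)
        return repository

    repo = build_repository(hr_records, ad_accounts)
    print(repo["E101"]["groups"])  # -> ['it-admins']

A real Identity Intelligence repository adds conflict resolution, history, and data-quality rules on top of this simple merge, but the core idea is the same: one profile per identity, populated from every source.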


An overview of Windows Active Directory

Active Directory (AD) is a directory service that Microsoft developed for Windows domain networks and is included in most Windows Server operating systems as a set of processes and services.

An AD domain controller authenticates and authorizes all users and computers in a Windows domain type network—assigning and enforcing security policies for all computers and installing or updating software. For example, when a user logs into a computer that is part of a Windows domain, Active Directory checks the submitted password and determines whether the user is a system administrator or a normal user.

Active Directory makes use of Lightweight Directory Access Protocol (LDAP) versions 2 and 3, Microsoft’s version of Kerberos, and DNS.
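Because AD speaks standard LDAP, its objects can be queried from almost any language. Below is a minimal sketch using the Python ldap3 library; the server address, bind account, password, and search base are hypothetical placeholders, not values from any real deployment.

    from ldap3 import Server, Connection, NTLM, ALL

    # Hypothetical domain controller and read-only service account.
    server = Server("ldap://dc1.example.com", get_info=ALL)
    conn = Connection(server,
                      user="EXAMPLE\\svc-reader",
                      password="...",
                      authentication=NTLM)

    if conn.bind():
        # Look up a user by sAMAccountName and read a few attributes.
        conn.search("dc=example,dc=com",
                    "(&(objectClass=user)(sAMAccountName=jdoe))",
                    attributes=["cn", "memberOf", "userPrincipalName"])
        for entry in conn.entries:
            print(entry.cn, entry.userPrincipalName)
        conn.unbind()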

Logical Structure

As a directory service, an Active Directory instance consists of a database and the corresponding executable code responsible for servicing requests and maintaining the database. The executable part, known as the Directory System Agent, is a collection of Windows services and processes that run on Windows 2000 and later. Objects in Active Directory databases can be accessed via the LDAP protocol, ADSI (a Component Object Model interface), the messaging API, and Security Accounts Manager services.


An Active Directory structure is an arrangement of information about objects. The objects fall into two broad categories: resources (e.g., printers) and security principals (user or computer accounts and groups). Security principals are assigned unique security identifiers (SIDs).

Each object represents a single entity—whether a user, a computer, a printer, or a group—and its attributes. Certain objects can contain other objects. An object is uniquely identified by its name and has a set of attributes—the characteristics and information that the object represents—defined by a schema, which also determines the kinds of objects that can be stored in Active Directory.

The schema object lets administrators extend or modify the schema when necessary. However, because each schema object is integral to the definition of Active Directory objects, deactivating or changing these objects can fundamentally change or disrupt a deployment. Schema changes automatically propagate throughout the system, and once created, a schema object can only be deactivated—not deleted—so changing the schema usually requires planning. Physically, the directory is organized into sites, each implemented as a set of well-connected subnets.

Forests, trees, and domains

The Active Directory framework that holds the objects can be viewed at a number of levels. The forest, tree, and domain are the logical divisions in an Active Directory network.

Within a deployment, objects are grouped into domains. The objects for a single domain are stored in a single database (which can be replicated). Domains are identified by their DNS name structure, the namespace. A domain is defined as a logical group of network objects (computers, users, devices) that share the same Active Directory database.

A tree is a collection of one or more domains and domain trees in a contiguous namespace, linked in a transitive trust hierarchy.

At the top of the structure is the forest. A forest is a collection of trees that share a common global catalog, directory schema, logical structure, and directory configuration. The forest represents the security boundary within which users, computers, groups, and other objects are accessible.

Organizational units

The objects held within a domain can be grouped into Organizational Units (OUs). OUs can provide hierarchy to a domain, ease its administration, and can resemble the organization’s structure in managerial or geographical terms. OUs can contain other OUs—domains are containers in this sense. Microsoft recommends using OUs rather than domains for structure and to simplify the implementation of policies and administration. The OU is the recommended level at which to apply group policies, which are Active Directory objects formally named Group Policy Objects (GPOs), although policies can also be applied to domains or sites. The OU is the level at which administrative powers are commonly delegated, but delegation can be performed on individual objects or attributes as well.

Organizational Units are an arrangement for the administrator and do not function as containers; the underlying domain is the true container. It is not possible, for example, to create user accounts with an identical username (sAMAccountName) in separate OUs, such as “fred.staff-ou.domain” and “fred.student-ou.domain”, where “staff-ou” and “student-ou” are the OUs. This is so because sAMAccountName, a user object attribute, must be unique within the domain. However, two users in different OUs can have the same Common Name (CN), the name under which they are stored in the directory itself.

In general, the reason duplicate names cannot be allowed through hierarchical directory placement is that Microsoft primarily relies on the principles of NetBIOS, a flat-namespace method of network object management that, for Microsoft software, goes all the way back to Windows NT 3.1 and MS-DOS LAN Manager. Allowing duplicate object names in the directory, or completely removing the use of NetBIOS names, would break backward compatibility with legacy software and equipment.

As the number of users in a domain increases, conventions such as “first initial, middle initial, last name” (Western order) or the reverse (Eastern order) fail for common family names like Li, Smith or Garcia. Workarounds include adding a digit to the end of the username (see the sketch after the next paragraph). Alternatives include creating a separate system of unique employee/student ID numbers to use as account names in place of users’ actual names, and allowing users to nominate their preferred word sequence within an acceptable use policy.

Because duplicate usernames cannot exist within a domain, account name generation poses a significant challenge for large organizations that cannot be easily subdivided into separate domains, such as students in a public school system or university who must be able to use any computer across the network.
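As a toy illustration of the digit-suffix workaround mentioned above, the following Python sketch derives a domain-unique account name from a person’s name. The `taken` set stands in for the sAMAccountNames already present in the domain, and the “first initial plus last name” convention is an assumption.

    def unique_username(first, last, taken):
        """Derive a candidate username; append a digit until it is unique."""
        base = (first[0] + last).lower()
        if base not in taken:
            return base
        n = 2
        while f"{base}{n}" in taken:
            n += 1
        return f"{base}{n}"

    taken = {"jsmith", "jsmith2"}
    print(unique_username("John", "Smith", taken))  # -> jsmith3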


The merits and demerits of Privileged Identity Management

Privileged Identity Management (PIM) is a domain within Identity Management focused on the special requirements of powerful accounts within the IT infrastructure of an enterprise. It is frequently used as an information security and governance tool to help companies meet compliance requirements and to prevent internal data breaches through the use of privileged accounts. The management of privileged identities can be automated to follow pre-determined or customized policies and requirements for an organization or industry.

Closely related is privileged password management, since the usual strategy for securing privileged identities is to periodically scramble their passwords, securely store the current password values, and control disclosure of those passwords.

Different market participants refer to products in this category using similar but distinct terminology. As a result, some analyst firms refer to this market as “PxM”, indicating multiple possible words for the “x”:

  • Privileged Access Management
  • Privileged User Management
  • Privileged Account Management
  • Privileged Identity Management
  • Privileged Password Management
  • Privileged Account Security

The commonality is that a shared framework controls the access of authorized users and other identities to elevated privileges across multiple systems deployed in an organization.

Special Requirements of Privileged Identities

A Privileged Identity Management technology needs to accommodate the special needs of privileged accounts, including their provisioning and life cycle management, authentication, authorization, password management, auditing, and access controls.

Provisioning and life cycle management – handles the access permissions of a personal user to shared/generic privileged accounts based on roles and policies.

Note: built-in privileged accounts are not normally managed using an identity management system (privileged or otherwise), as these accounts are created automatically when an OS, database, etc. is first installed, and are decommissioned along with the system or device.


Authentication — control authentication to privileged accounts, in two typical patterns.

First use case — control authentication into the privileged accounts themselves, for example by regularly changing their passwords.

Second use case — control authentication into a privileged access management system, from which a user or application may “check out” access to a privileged account.

Authorization — control what users and what applications are allowed access to which privileged accounts or elevated privileges.

First use case — pre-authorized access (“these users can use these accounts on these systems any time.”).
Second use case — one-time access (“these users can request access to these accounts on these systems, but such requests for short-term access must first be approved by …”).

Password Management — scheduled and event-triggered password changes and password complexity rules, all applying new password values to privileged accounts.
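To make the password-management function concrete, here is a minimal Python sketch of the kind of routine a scheduled rotation job might call: it generates a random value that satisfies simple complexity rules. The rules, length, and character classes are illustrative assumptions, not any vendor’s policy.

    import secrets
    import string

    def generate_privileged_password(length=24):
        """Return a random password containing all four character classes."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        while True:
            candidate = "".join(secrets.choice(alphabet) for _ in range(length))
            # Complexity rule: at least one lower, upper, digit, and symbol.
            if (any(c.islower() for c in candidate)
                    and any(c.isupper() for c in candidate)
                    and any(c.isdigit() for c in candidate)
                    and any(c in string.punctuation for c in candidate)):
                return candidate

    print(generate_privileged_password())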

Auditing – both event logs (who accessed which account, when, etc.) and session capture (recording and replaying what happened during a login session to a given account).

Access Controls – Control what a given user, connected to a given privileged account, on a given system, can do. Two design principles need to be balanced here: the principle of least privilege and a desire to minimize the need to develop and maintain complex access control rules.

Session Recording – The ability to record access to privileged accounts is vital both from a security and compliance perspective.

Session isolation – Controlling access to privileged accounts using a session proxy (or next generation jump server) can prevent issues such as pass-the-hash attacks and malware propagation.

Risks of Unmanaged Privileged Identities

Unmanaged privileged identities can be exploited by both insiders and external attackers. If they are not monitored, held accountable, and actively controlled, malicious insiders, including system administrators, can steal sensitive information or cause significant damage to systems.

A 2009 report prepared for a US congressional committee by Northrop Grumman Corporation details how US corporate and government networks are compromised by overseas attackers who exploit unsecured privileged identities. According to the report, “US government and private sector information, once unreachable or requiring years of expensive technological or human asset preparation to obtain, can now be accessed, inventoried, and stolen with comparative ease using computer network operations tools.”

The intruders profiled in the report combine zero-day vulnerabilities developed in-house with clever social exploits to gain access to individual computers inside targeted networks. Once a single computer is compromised, the attackers exploit “highly privileged administrative accounts” throughout the organization until the infrastructure is mapped and sensitive information can be extracted quickly enough to circumvent conventional safeguards.
Privileged account passwords that are secured by a privileged identity management framework—so that they are cryptographically complex, frequently changed, and not shared among independent systems and applications—offer a means to mitigate the threat to other computers that arises when a single system on a network is compromised.


Signs of falling victim to identity theft

Identity is a crucial element in most computer security mechanisms. Access controls depend on identifying the users or devices that are allowed to view or use resources and keeping others out. We’re asked to “prove” our identities every time we board a plane, check into a hotel, make a purchase via check or credit card, or log onto a computer or secure web site. But the standard of proof is often very low, and in the IT world, we seem to have a misconception about what identity really is – and isn’t.

Identity theft is a form of fraud in which someone pretends to be someone else by assuming that person’s identity, usually as a method to gain access to resources or obtain credit and other benefits in that person’s name. The victim of identity theft (here meaning the person whose identity has been assumed by the identity thief) can suffer adverse consequences if they are held responsible for the perpetrator’s actions. Identity theft occurs when someone uses another’s personally identifying information, like their name, identifying number, or credit card number, without their permission, to commit fraud or other crimes.

The term identity theft was coined in 1964; however, it is not literally possible to steal an identity—less ambiguous terms are identity fraud or impersonation.

Determining the link between data breaches and identity theft is challenging, primarily because identity theft victims often do not know how their personal information was obtained, and identity theft is not always detectable by the individual victims, according to a report prepared for the FTC. Identity fraud is often, but not necessarily, the consequence of identity theft: someone can steal or misappropriate personal information without then committing identity theft with it, as when a major data breach occurs. A US Government Accountability Office study determined that “most breaches have not resulted in detected incidents of identity theft”, though the report also warned that “the full extent is unknown”. A later unpublished study by Carnegie Mellon University noted that “Most often, the causes of identity theft is not known,” but reported another conclusion that “the probability of becoming a victim to identity theft as a result of a data breach is … around only 2%”.

Techniques for obtaining and exploiting personal information for identity theft

Identity thieves typically obtain and exploit personally identifiable information about individuals, or various credentials they use to authenticate themselves, in order to impersonate them. Examples include:

  1. Rummaging through rubbish for personal information (dumpster diving)
  2. Retrieving personal data from redundant IT equipment and storage media including PCs, servers, PDAs, mobile phones, USB memory sticks and hard drives that have been disposed of carelessly at public dump sites, given away or sold on without having been properly sanitized
  3. Using public records about individual citizens, published in official registers such as electoral rolls
  4. Stealing bank or credit cards, identification cards, passports, authentication tokens … typically by pickpocketing, housebreaking or mail theft
  5. Exploiting common-knowledge questions used for account verification: “What’s your mother’s maiden name?”, “What was your first car model?”, “What was your first pet’s name?”, etc.
  6. Skimming information from bank or credit cards using compromised or hand-held card readers, and creating clone cards
  7. Using ‘contactless’ credit card readers to acquire data wirelessly from RFID-enabled passports
  8. Observing users typing their login credentials, credit/calling card numbers etc. into IT equipment located in public places (shoulder surfing)
  9. Stealing personal information from computers using breaches in browser security or malware such as Trojan horse keystroke logging programs or other forms of spyware
  10. Hacking computer networks, systems and databases to obtain personal data, often in large quantities
  11. Exploiting breaches that result in the publication or more limited disclosure of personal information such as names, addresses, Social Security numbers or credit card numbers
  12. Advertising bogus job offers in order to accumulate resumes and applications typically disclosing applicants’ names, home and email addresses, telephone numbers and sometimes their banking details
  13. Exploiting insider access and abusing the rights of privileged IT users to access personal data on their employers’ systems
  14. Infiltrating organizations that store and process large amounts or particularly valuable personal information
  15. Impersonating trusted organizations in emails, SMS text messages, phone calls or other forms of communication in order to dupe victims into disclosing their personal information or login credentials, typically on a fake corporate website or data collection form (phishing)
  16. Brute-force attacking weak passwords and using inspired guesswork to compromise weak password reset questions
  17. Obtaining castings of fingers for falsifying fingerprint identification.
  18. Browsing social networking websites for personal details published by users, often using this information to appear more credible in subsequent social engineering activities
  19. Diverting victims’ email or post in order to obtain personal information and credentials such as credit cards, billing and bank/credit card statements, or to delay the discovery of new accounts and credit agreements opened by the identity thieves in the victims’ names
  20. Using false pretences to trick individuals, customer service representatives and help desk workers into disclosing personal information and login details or changing user passwords/access rights (pretexting)
  21. Stealing cheques (checks) to acquire banking information, including account numbers and bank routing numbers
  22. Guessing Social Security numbers by using information found on Internet social networks
  23. Exploiting low security/privacy protection on photos that can easily be clicked and downloaded from social networking sites
  24. Befriending strangers on social networks and taking advantage of their trust until private information is given.

The Year of the Hybrid Cloud

About a decade ago, prevailing IT traditions dictated that businesses use on-premises infrastructure for their enterprise applications. However, a paradigm shift has occurred with the emergence of IaaS and on-demand cloud providers such as AWS. It has given birth to a new virtualization strategy that most businesses have been quick to adopt: use servers provided by third-party cloud vendors, eliminate infrastructure limitations, and develop and deploy multiple applications on a single piece of hardware. The fact that businesses can do all this and more without spending millions of dollars on IT systems is proving to be the proverbial icing on the cake.

The evolution of IaaS and SaaS has opened up a world of opportunities for smaller businesses and startups, which now have the ability to explore and deploy their ideas with greater agility. This has not only helped cut costs but has also improved time to market. However, it also poses a new challenge for the enterprise.

Applications are becoming increasingly hybrid in nature. The highly critical ones run on premises while the less critical ones are deployed on the cloud. The on-premises infrastructure resources are then linked to the cloud through VPN or VPC, resulting in a hybrid cloud that offers a high degree of flexibility and adaptability.

Enterprises find the hybrid cloud very attractive as it allows them to achieve a lot while staying within their budget. In addition, by restricting the highly critical applications to on-premises infrastructure, they don’t have to compromise on any of their security and compliance requirements. The challenge enterprises now face is a crucial one: how to manage hybrid resources in the most efficient manner possible?

With more and more businesses jumping onto the hybrid cloud bandwagon, technology and services companies can expect to see significant changes in the way businesses manage their hybrid resources.

Here are 5 crucial developments in the hybrid cloud arena that we think 2014 is likely to bring:

1. Hybrid Assets Management System
The fact that different IaaS providers have different strengths means enterprises will have a presence on more than one cloud platform in order to make the most of their applications. Multiple cloud platforms combined with on-premises infrastructure will result in a highly complex hybrid cloud. Web-based SaaS applications, such as NetSuite, Salesforce, Workday, etc., will only add to the complexity. The need of the hour is a unified system that helps manage scattered assets and addresses other needs like configuration management, risk, and compliance. We predict that this year will bring a flood of products and services designed to simplify asset management and life on the hybrid cloud.

2. Hybrid Cloud Migration Tools
In today’s hybrid cloud environment, it is often necessary to move application resources across various cloud platforms – from private cloud to public, from public to private, or public to public. In a virtualized world, it is extremely difficult to shift resources around without the right tools. This year, we can expect to see various tools that provide the required support and facilitate resource movement across various cloud platforms.

3. Hybrid Cloud Security, Governance, Risk and Compliance
Hybrid cloud adds a whole new dimension to security, governance, risk and compliance. With on-premises infrastructure, businesses have it easy because the requirements are clearly defined by internal policies. However, in a hybrid cloud environment, businesses have to work with different service providers and different SLAs. In addition, moving data across various cloud resources raises the question of compliance. In 2014, IT companies with a focus on these areas can expect increased demand for security and GRC management in hybrid cloud environments. Expect to see more cloud service brokerage companies specializing in security and GRC!

4. Hybrid Cloud Backup, DR and Archival Strategy
Although cloud backup, disaster recovery (DR) and archiving have existed for a while now, these segments will show drastic growth in the coming year. Backup and DR are used by companies mostly to meet compliance requirements. Businesses are rarely required to retrieve their data from backup or deploy the DR site for continuity purposes. Having an elaborate backup, DR and archiving setup that is seldom used can prove to be quite expensive in the long term. Many companies have started questioning the value of having an expensive on-premises backup and DR setup for a crisis that may or may not happen in the company’s lifetime. However, completely doing away with such a setup is a risk that companies are not willing to take. This is where the public cloud comes in. A public cloud backup, DR and archiving setup costs only 20% of an on-premises setup. And yet, it offers everything an on-premises setup has to offer. In 2014, we are likely to see more and more companies adopting a hybrid cloud backup, DR and archival strategy.
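To put the 20% figure above in perspective, here is a toy calculation in Python; the on-premises spend is a hypothetical number, not data from any study.

    # Hypothetical annual on-premises backup/DR/archival spend.
    on_prem_annual_cost = 500_000
    cloud_annual_cost = on_prem_annual_cost * 0.20   # the 20% figure cited
    print(f"Cloud setup: ${cloud_annual_cost:,.0f} per year "
          f"(saves ${on_prem_annual_cost - cloud_annual_cost:,.0f})")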

5. Reality Of Private Cloud In The Era Of Public Cloud
The commoditization of hardware and the level of control that private cloud affords are the two main reasons why medium to large businesses have been quick to adopt it as an on-premises infrastructure. However, the cost of maintenance and the resources that need to be dedicated to managing this setup are financial pain points that businesses have been unable to ignore. Now, with the rise of the public cloud, businesses, especially medium and large enterprises, have begun to question the wisdom of opting for a private cloud. The pricing structure, the ease of use, and the built-in maintenance and service features of the public cloud have resulted in more and more businesses making the move to the public cloud platform. Today, in a hybrid cloud setup, it is safe to say that the balance is in favor of private cloud. But as companies tentatively step onto the public cloud platform and find that it’s safe as well as economical, we can expect this balance to shift in favor of the public cloud. All in all, 2014 will be a testing time for the growth of private cloud in a hybrid cloud environment.


Tracking Your EAaaS Payback: Software Metrics

Although this series is about the payback of migrating to the cloud, not every aspect of moving to the cloud will save money on its own. In certain situations, software costs can be greater in the cloud. The main drivers of increased software costs in a cloud environment are the virtualization software and the service management software.

Each consolidated system will require a license for virtualization software and additional service management software. These costs are partially offset by the reduction in the number of operating system licenses, due to the smaller number of systems.

In general, software costs increase, but the overall increase is typically a small single-digit percentage. When compared to the savings achieved in other areas, this software cost is not significant.

For example, in the case of a medium-sized banking customer, the client already had a virtualized environment established, so they were able to further reduce the number of virtualization licenses when moving to a cloud environment; as a result, their software costs were reduced. The other two customers did not have virtualized environments at the start, so their software costs increased.
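As a rough feel for that trade-off, the Python sketch below models added virtualization and service-management licenses, partly offset by fewer OS licenses. Every price and server count is a hypothetical assumption, not a figure from the cases above.

    servers_before, hosts_after = 100, 25                      # assumed 4:1 consolidation
    os_license, virt_license, mgmt_license = 800, 2000, 550    # assumed license prices

    before = servers_before * os_license
    after = hosts_after * (os_license + virt_license + mgmt_license)
    print(f"Net software cost change: {100 * (after - before) / before:+.1f}%")
    # -> about +4.7% with these inputs, i.e. a small single-digit increase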

Software Metrics and You
You know that transitioning to the Cloud can directly affect the bottom line of your IT costs. Every blog post, every article, every hashtag surrounding the Cloud says “it’s cheaper, it’s cheaper, #itscheaper.” What most articles won’t admit is that certain metrics could be negatively impacted in a cloud migration strategy. This 5-part series explaining payback includes the possibility that your software costs could increase, but this potential setback is much smaller than the prospective savings overall. Do you want to learn more about the software metrics of your EAaaS payback? Download the white paper here.


Tracking Your EAaaS Payback: Hardware Metrics

Although enterprise applications running on the public cloud are still in the early adopter phase, IT executives are under pressure to develop a cloud strategy for their enterprise applications. It’s up to them to untangle the cloud “spaghetti” and comprehend the basic economics and capabilities. To effectively start tracking Cloud Payback, first examine the hardware metrics.

Server Depreciation
There are two main areas of hardware payback. The first is physical server depreciation. The hardware savings come from improving server utilization and decreasing the number of servers. The typical server in a datacenter runs a single application and is utilized at 5% to 25% of its capacity. As systems are consolidated and virtualized in a cloud environment, the number of servers required can drop significantly and the utilization of each server can be greatly increased, resulting in significant savings in hardware costs and the avoidance of future capital investment (see the sketch after the metric list below).

The hardware metrics can be classified as:

  • Number of existing physical servers
  • Average total purchase price per server
  • Average percent of hardware utilization
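Putting those metrics together, here is a back-of-the-envelope Python sketch of the consolidation arithmetic. The server count, utilization figures, and price are hypothetical, and a real sizing exercise would also need headroom, peak load profiles, and failover capacity.

    import math

    physical_servers = 200
    avg_utilization = 0.10        # within the 5%-25% range cited above
    target_utilization = 0.60     # assumed post-consolidation target
    price_per_server = 6_000      # assumed average purchase price

    needed = math.ceil(physical_servers * avg_utilization / target_utilization)
    avoided_spend = (physical_servers - needed) * price_per_server
    print(f"{physical_servers} servers -> {needed}; "
          f"avoided hardware spend ~ ${avoided_spend:,}")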

If much of the infrastructure and many applications are moved to an external cloud service provider, there may be a decrease in assets, which will have an impact on depreciation charges.

Energy and Facilities Costs
The second area of hardware payback comprises energy and facilities costs. If fewer servers are using energy and floor space, your company will see direct savings. Based on a study conducted by IBM, typical savings in total hardware, energy, and facilities can be in the range of 30% to 70%, depending on your current size and annual spending. The cloud computing platform can also affect the size of the cost savings.

Hardware Metrics and You
You know that transitioning to the Cloud can directly affect the bottom line of your IT costs. Every blog post, every article, every hashtag surrounding the Cloud says “it’s cheaper, it’s cheaper, #itscheaper.” We believe that there is a systematic formula for how the Cloud can save your company money, and that formula starts with hardware metrics. Do you want to learn more about the hardware metrics of your EAaaS payback? Download the white paper here.


The Truth Behind The TCO

Why Is EAaaS More Advantageous than Premises-Based Applications?

If you’re reading this, it’s likely that you’re a business process or IT applications professional. Or you’re an innocent bystander interested in Enterprise Applications. Either way, you must be wondering how EAaaS (Enterprise Applications as a Service) can provide a better TCO than traditional infrastructure or private cloud alternatives. To skip this blog post and read the full paper, download it here!

No matter how the application is deployed, the needs of enterprise customers remain the same, so both cloud and premises-based solutions address the customers’ requests. However, a vast difference arises between EAaaS and premises-based application infrastructures. Premises-based enterprise applications involve tightly integrated solutions, long-term commitments, client-owned assets, and few global suppliers. These detrimental aspects create a higher TCO (total cost of ownership) for you. Sure, these expensive premises-based solutions may have been the only option in the past, but now you can jump into the cloud to create test environments and backup instances for enterprise applications such as those based on Oracle or SAP.

Before you plug in the numbers to find your TCO, it’s critical to understand what to measure when assessing an EAaaS offering. EAaaS lowers TCO in five key areas: hardware, software, automated provisioning, productivity, and administration. In one study, savings in each area ranged from 5% to 40% when companies switched to Enterprise Applications as a Service. There is no doubt that the TCO of EAaaS is lower. Maybe you’re one of those people that needs to touch, see, and hey, smell, before you buy. Go ahead; get the assessment. What you find may surprise you!
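For a back-of-the-envelope feel for those five areas, here is a toy Python comparison. The baseline costs and per-area savings rates (chosen within the 5-40% range cited) are hypothetical assumptions, not figures from the study.

    baseline = {"hardware": 400_000, "software": 250_000,
                "provisioning": 80_000, "productivity": 120_000,
                "administration": 150_000}          # assumed annual costs
    savings_rate = {"hardware": 0.40, "software": 0.05,
                    "provisioning": 0.30, "productivity": 0.20,
                    "administration": 0.25}         # assumed, within 5-40%

    eaas_tco = sum(cost * (1 - savings_rate[area])
                   for area, cost in baseline.items())
    print(f"Premises TCO: ${sum(baseline.values()):,} "
          f"-> EAaaS TCO: ${eaas_tco:,.0f}")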


What is EAaaS? Clearing Up the Cloud Confusion

What does the term “cloud” mean? This terminology has become a marketing buzzword, being constantly used by salespeople to attract attention, but the true meaning behind the word is rarely understood.

The term “cloud” is used to describe the supply of technology as a service rather than as a product. Since “cloud” can be such a broad term, we will examine how it relates to one area: enterprise applications. Today, many such applications are available as services, including Workday, NetSuite, QuickBooks, and Oracle On Demand. Instead of building and maintaining in-house IT infrastructure, companies can buy the software they need as a service. Companies no longer need to invest millions of dollars in hardware or IT resources; they can simply pay operational expenses. Small to mid-size companies especially see the value in this: they get all the advantages of a large corporation without the exorbitant cost.


The SaaS approach, however, is not without its challenges. Although this solution will satisfy your present needs, it does not offer as many options as in-house hardware. The EAaaS (Enterprise Application as a Service) approach provides a greater level of customization. This service is bundled with network, storage, systems, DBA, and administration support of all kinds. EAaaS opens up a new world of opportunity, allowing both customers and database administrators to transition from legacy to cloud.


Private Versus Public Cloud: A Comparison

You can get the same results using public cloud hosted IT. Cost savings occur by eliminating the upkeep of physical servers – public cloud hosting is a pay-as-you-go service where you have the flexibility to scale when needed. Your data is securely stored in the cloud and your applications are run through virtual instances also hosted in the cloud.


Aside from the cost savings, public cloud hosting is ideal for hosting non-production, backup, and data recovery environments. Cloud hosting provides a level of mobility that private hosting cannot match. You have convenient access to your data on any platform, laptop, mobile device, tablet, or PC. So whether you’re on a client site, in the office, or just sitting at home, you will have secure access to the information you need.
