Install Apache Hadoop on Red Hat Linux
Apache Hadoop is an open-source framework built for distributed Big Data storage and processing across computer clusters. This article will guide you through installing Apache Hadoop on a single-node cluster on CentOS 7 (the steps also work on RHEL 7 and Fedora 23+).

If your cluster doesn't have the requisite software, you will need to install it first. For example, on Ubuntu Linux:

$ sudo apt-get install ssh
$ sudo apt-get install rsync

(Recent Hadoop releases use pdsh for their cluster scripts: $ sudo apt-get install pdsh.)

Apache Hadoop 2.6.5 offers noticeable improvements over the previous stable 2.x.y releases, particularly in HDFS and MapReduce. This kind of guide applies to Hadoop 2.6 on CentOS/RHEL 7/6/5, Ubuntu, and other Debian-based operating systems, though it does not cover every configuration option. A similar procedure works for older releases such as RHEL 5.2, and for a multi-node Hadoop cluster on Red Hat Linux: the setup uses Hadoop's Distributed File System, HDFS, even when the "cluster" contains only a single local machine.

Depending on the size, type, and scale of your data, you can deploy Hadoop in stand-alone or cluster mode. For a beginner-focused stand-alone installation, you only need a VPS or server running a minimal CentOS 7.x installation with root access; there are no other specific minimum requirements. To download the Oracle JDK non-interactively, accept the license through a cookie header:

$ wget --header "Cookie:oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u91-b14/jdk-8u91-linux-x64.rpm"
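Before going further, it helps to confirm the prerequisites mentioned above are actually present. A minimal sketch, assuming a POSIX shell; note that the package names quoted above (ssh, rsync, pdsh) are the Ubuntu ones, while on RHEL/CentOS the equivalents are openssh-clients/openssh-server and rsync:

```shell
# Check for the prerequisite tools the article mentions.
# "sh" is included only as a sanity check that always succeeds.
for cmd in sh ssh rsync java; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING - install it before continuing"
  fi
done
```

Any tool reported as MISSING should be installed with your distribution's package manager before proceeding.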
HDFS, the Hadoop Distributed File System, stores data on commodity machines, providing very high aggregate bandwidth across the cluster. HDFS was inspired by the Google File System and stores large files (typically in the range of gigabytes to terabytes) across multiple machines. In other words, Hadoop is a free, open-source framework that lets you store huge amounts of data (Big Data) and process it in a much more practical way.

If you have registered your machine using Red Hat Subscription Manager, enable the channel by running:

# subscription-manager repos --enable=rhel-6-server-rhs-client-1-rpms

If you registered using a Satellite server, enable the channel with the corresponding Satellite command instead.

Given below is an example of running the setup_cluster.sh script on the Ambari management server and four Red Hat Storage nodes that share the same logical volume and mount point intended for use as a Red Hat Storage brick:

./setup_cluster.sh --yarn-master yarn.hdp rhs-1.hdp:/mnt/brick1:/dev/rhs_vg1/rhs_lv1

The earlier Apache Hadoop 2.4.0 release brought similar improvements in HDFS and MapReduce, and the same kind of guide applies to installing it on a CentOS 6.5 system (again without covering the overall configuration of Hadoop). On RHEL 7, the steps below install vanilla Apache Hadoop/MapReduce 2.x in pseudo-distributed mode on 32/64-bit desktop or server editions. Apache Hadoop requires Java version 8 or above, so you can choose to install either OpenJDK or Oracle JDK. For this demo, we will install OpenJDK 8.
### Debian 9 / Ubuntu 16.04 ###
apt-get -y install openjdk-8-jdk wget
### CentOS 7 / RHEL 7 ###
yum -y install java-1.8.0-openjdk wget

Resolving the errors Hadoop generates should help you understand this environment much better. Let's assume we start from a fresh CentOS install. Use jps to check that DataNode is running on the slaves and that DataNode, NameNode, and SecondaryNameNode are running on the master.

Installing Apache Hadoop: after setting up the Java environment, the first step is to create a system user account for the Hadoop installation:

useradd hadoop
passwd hadoop

Next, configure SSH keys for the hadoop user.

To install the Hadoop packages, execute the following command on all cluster nodes. For RHEL/CentOS/Oracle Linux:

yum install hadoop hadoop-hdfs hadoop-libhdfs hadoop-yarn hadoop-mapreduce hadoop-client openssl

For SLES:

zypper install hadoop hadoop-hdfs hadoop-libhdfs hadoop-yarn hadoop-mapreduce

Starting Hadoop and verifying that it works: for YARN, a pseudo-distributed Hadoop installation consists of one node running all five Hadoop daemons: namenode, secondarynamenode, resourcemanager, datanode, and nodemanager. To view the configuration files on Red Hat or SLES systems:

$ rpm -ql hadoop-conf-

Hadoop is a framework written in Java for running applications on large clusters of commodity hardware, and it incorporates features similar to those of the Google File System (GFS) and the MapReduce computing paradigm. Hadoop's HDFS is a highly fault-tolerant distributed file system. For an Ambari-based install, you can follow these docs: http://docs.hortonworks.com/HDPDocuments/Ambari-1.6.1.0/bk_using_Ambari_book/content/ambari-chap1.html. Installing Ambari and the stack from local repositories is covered there as well.
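Once the packages are in place, a pseudo-distributed node needs core-site.xml to point at the local NameNode. A minimal sketch, assuming the conventional Hadoop 2.x port 9000; the CONF_DIR path below is illustrative, since on a real install the file lives in /etc/hadoop/conf or $HADOOP_HOME/etc/hadoop:

```shell
# Write a minimal core-site.xml for a single-node (pseudo-distributed) setup.
# CONF_DIR is a demo assumption; adjust to your actual configuration directory.
CONF_DIR=${CONF_DIR:-/tmp/hadoop-conf-demo}
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
echo "wrote $CONF_DIR/core-site.xml"
```

fs.defaultFS tells every Hadoop client and daemon where the NameNode listens; on a multi-node cluster you would replace localhost with the master's hostname.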
Install and configure Apache Hadoop on CentOS/RHEL 7/6/5 and Ubuntu: Apache Hadoop is an open-source software framework for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware, and all of its modules are designed with that in mind. Installing Hadoop on a CentOS 6/RHEL box is now a lot simpler since RPM versions became available, but you still need to install the JDK first. If you have any problems with yum, you can also use the Apache mirror service: download your preferred release and install it by hand.

There are some particular considerations when building Hadoop Common (Common/MapRed/HDFS) on ppc64 Linux. A reference environment: an IBM PowerLinux 7R2 (one PowerVM virtualized partition, or LPAR), Red Hat Enterprise Linux 6.2, and Hadoop Common source version 1.0.3; install all build dependencies as root.

For an HA setup, cross-check the status of the master and standby NameNodes with the following hdfs commands:

[hduser@mn1 ~]$ hdfs haadmin -getServiceState mn1
[hduser@mn1 ~]$ hdfs haadmin -getServiceState mn2

A failover test can then be performed using two methods.

To install Hadoop on a 32-bit CentOS, create a user name and password for Hadoop, then install the JDK:

# rpm -qa | grep jdk
# ls /opt/
# rpm -Uvh /opt/jdk-7u45-linux-i586.rpm
# java -version
# ls /usr/java/jdk1.7.0_45
# su - hadoop

Hadoop is written in Java, so you will need Java installed on your machine, version 6 or later. Sun's JDK is the one most widely used with Hadoop, although others have been reported to work. Hadoop runs on Unix and on Windows; Linux is the only supported production platform, but other flavors can be used for development. If you want vendor support of Hadoop (speaking solely for Cloudera at this point), CDH4 is supported on CentOS/RHEL/Oracle Linux 5 and 6, SLES 11, and Ubuntu.
If you are planning to use Hadoop for learning, R&D, development, testing, and so on, you can install it on Ubuntu. What follows is more or less a set of reference notes for installation, made the first time HDFS (Hadoop Distributed File System) was installed on a Linux Mint (Ubuntu-based) system, starting with installing Sun Java on Linux.

Introduction: Apache Hadoop is an open-source software framework that supports data-intensive distributed applications, licensed under the Apache v2 license. It supports running applications on large clusters of commodity hardware. yum install works on Red Hat or CentOS Linux distributions.

One of the best-known technologies used for Big Data is Hadoop, an open-source suite under the Apache foundation: http://hadoop.apache.org/. The core of Hadoop is the MapReduce framework, which lets developers write programs that process massive data sets.

Note that Red Hat Storage Server takes a different approach: you end up running MapReduce jobs on top of Gluster rather than HDFS, and the two have some very different properties. Red Hat Storage Server runs on Linux-based x86 servers with SAS or SATA drives, which you can RAID-protect (or not) as you see fit; the clustered file system can ride on ext3 or ext4.

Step 2: Set JAVA_HOME. For Hadoop 2, the recommended versions of Java are listed at http://wiki.apache.org/hadoop/HadoopJavaVersions. In general, a Java Development Kit 1.6 (or greater) should work. For this install, we will use Open Java 1.6.0_24, which is part of Red Hat Enterprise Linux 6.3.

The report here describes the required steps for setting up a single-node Hadoop cluster backed by the Hadoop Distributed File System, running on Ubuntu Linux.
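Step 2 above can be sketched as a small snippet appended to hadoop-env.sh. Both paths below are illustrative assumptions (the JDK path matches the OpenJDK package layout on RHEL 6, and /usr/local/hadoop is a common but not mandatory install prefix); adjust them to your system:

```shell
# Append JAVA_HOME (and, for convenience, HADOOP_HOME) to an env file.
# ENV_FILE defaults to a demo path; on a real install you would edit
# hadoop-env.sh in the Hadoop configuration directory instead.
ENV_FILE=${ENV_FILE:-/tmp/hadoop-env-demo.sh}
cat >> "$ENV_FILE" <<'EOF'
export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk.x86_64
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF
echo "JAVA_HOME set in $ENV_FILE"
```

Every Hadoop daemon reads JAVA_HOME from hadoop-env.sh, so a wrong path here is the most common cause of "JAVA_HOME is not set" startup failures.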
Step 1: Install Java and Hadoop. Apache Hive requires Java 6 or a later version, and Hadoop must be installed before Hive. Step 2: Download the Hive archive. After configuring Hadoop successfully on your Linux system, start the Hive setup by downloading the latest Hive release.

Hadoop is supported by the GNU/Linux platform and its flavors, so before installing Hadoop we need to set up Linux with ssh (Secure Shell). Standalone mode is suitable for running MapReduce programs during development, since it is easy to test and debug them.

Hadoop can be installed on Red Hat Enterprise Linux (RHEL), among other operating systems. For Hive, by default (on RHEL/CentOS/Oracle Linux 6) Ambari will install an instance of MySQL on the Hive Metastore host, and Oozie is provisioned similarly. Apache recommends that you have at least 1 GB of RAM available. For more prerequisites, check the document "Hortonworks Data Platform: Automated Install with Ambari". There is no single hardware requirement set for installing Hadoop; supported operating systems include Red Hat Enterprise Linux (RHEL) v6.x.

Spark can work with data from various sources: AWS S3, HDFS, Cassandra, Hive (structured data), HBase, or any other Hadoop data source. Step 1 for Spark is again to install Java; there is a separate blog post on installing Java 8 on CentOS/RHEL 7.x, and if you already have it installed you can verify the version.

Finally, a word on Apache Pig: Apache Pig is a tool/platform for creating and executing MapReduce programs used with Hadoop, i.e. for analyzing large sets of data.
You can say Apache Pig is a platform in its own right. There is also a guide to installing the Tessera stack (Hadoop, RHIPE, datadr, trelliscope, and other supporting packages) on a multi-node cluster running 64-bit Red Hat Enterprise Linux 6.x; those instructions also work with few modifications on CentOS 6, and an Ubuntu version exists.

On Fedora-style systems, install the Hadoop RPMs with:

yum install hadoop-common hadoop-common-native hadoop-hdfs hadoop-mapreduce hadoop-mapreduce-examples hadoop-yarn

Initialize the HDFS directories:

hdfs-create-dirs

Start the cluster by issuing:

systemctl start hadoop-namenode hadoop-datanode hadoop-nodemanager

For a tarball install instead:
1) Download from http://hadoop.apache.org/mapreduce/releases.html.
2) Untar the Hadoop file: tar xvfz hadoop-0.20.2.tar.gz.
3) Set the path to the Java compiler by editing the JAVA_HOME parameter in hadoop/conf/hadoop-env.sh. Mac OS users can use /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home; Linux users can point at their installed JDK directory.

Installing the MapR Client on CentOS, RedHat, or Oracle Linux: remove any previous MapR software first. You can use rpm -qa | grep mapr to get a list of installed MapR packages, then pass the package names, separated by spaces, to rpm -e.

For a VM-based install: in VirtualBox create a new virtual machine, set the type to "Linux" and the version to "Redhat x64", and give it 8192 MB of memory and a disk. When prompted for installation media, click "Browse" and select the rhel-server-6.0-i386-dvd file, then click "Open". (The third configuration file to edit is hdfs-site.xml in the conf folder.)

Running Hadoop with Ambari: if you open the Ambari site (http://localhost:8080) you will see the cluster dashboard. If you reboot your machine, you will have to start all the services again.
The main goal of a multi-node tutorial is to get a more sophisticated Hadoop installation up and running: building a multi-node cluster using two CentOS boxes, installing and configuring it step by step on CentOS/RHEL 6 (see also the steps to install Java on CentOS 5/6 or RHEL 5/6). Hadoop as a production platform is supported on Linux, but Windows and other flavors of Unix such as Mac OS X can be used as development environments. Red Hat Enterprise Linux (RHEL), Ubuntu, CentOS, Fedora, Debian, and openSUSE take up the major share of commercially deployed Linux.

To use rmr2 on a cluster:
1. Check that rmr2 and its dependent packages are installed on each node in the cluster.
2. Make sure that a link to the Rscript executable is in the PATH on each node in the Hadoop cluster.
3. Confirm that the user who invokes R has read and write permissions on HDFS.
4. Verify the HADOOP_CMD environment variable.

Commercial offerings have their own deployment guides. The Dell EMC Ready Bundle for Red Hat OpenStack Platform Hadoop Solution Deployment Guide (version 10.0.1) covers all server, network, and storage hardware needed to install and operate the solution; Cloudera Distribution for Apache Hadoop (CDH) is a commercially supported distribution. The Intel Distribution for Apache Hadoop (IDH) guide for RHEL similarly walks through fabric configuration (initial setup of Cisco UCS 6296 Fabric Interconnects), installing httpd, enabling syslog, setting ulimit, disabling SELinux, setting TCP retries, disabling the Linux firewall, and configuring data drives on the NameNode.

You can install Machine Learning Server on open-source Apache Hadoop from http://hadoop.apache.org, but support is only offered for commercial distributions. Supported operating systems: Red Hat Enterprise Linux 6.x and 7.x, SUSE Linux Enterprise Server 11, Ubuntu 14.04 and 16.04; Spark 2.0 and 2.4.

To fetch a release from the Apache archive:
1. Go to the Apache download archive site: http://archive.apache.org/dist/hadoop/core/.
2. Select the correct tar file for your version of Apache Hadoop.
For example, version 1.0.3: http://archive.apache.org/dist/hadoop/core/hadoop-1.0.3/hadoop-1.0.3.tar.gz.

Install Java, then add a user hduser that we will treat as the Hadoop administrator and use to install Hadoop and related components. Later, on another machine (an edge node), we will also add a user hdclient who will access the Apache Hadoop cluster to run MapReduce jobs. Then install Apache Hadoop and update the environment.

Background: the following outlines a "personal Hadoop" development environment, much akin to the classic single-node tutorials:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
http://wiki.apache.org/hadoop/

To install the Oracle JDK from a tarball and register it with alternatives:

[root@hadoop ~]# tar -xzf jdk-8u45-linux-x64.tar.gz -C /opt/
[root@hadoop ~]# alternatives --install /usr/bin/java java /opt/jdk1.8.0_45/bin/java 2
[root@hadoop ~]# alternatives --config java

We will also need Apache Maven (a build automation tool) and Protocol Buffers (a serialization library developed by Google) when building from source.

For a step-by-step Apache Spark install, again start with Java 8 for the JVM:

tar xzf jdk-8u51-linux-x64.tar.gz

We will configure Spark to run in standalone mode, and therefore download a prebuilt binary of Spark precompiled against Hadoop.

Supported distributions typically include Cloudera's Distribution Including Apache Hadoop (CDH) version CDH5.4 and higher and the Hortonworks Distribution for Apache Hadoop version 2.3 and higher, on Red Hat Enterprise Linux x86 versions 4.0, 5.0, 6.0, 7.0 and higher, SUSE Linux Enterprise Server x86 versions 10, 11, 12 and higher, and Ubuntu Linux x86.

A related tutorial describes how to build a distributed Apache Hadoop multi-node cluster on four nodes running Ubuntu Server 14.04.1 LTS. In Red Hat's case, the company is building support for Hadoop into Red Hat Enterprise Linux as tightly as it built in support for OpenStack.
Now both companies have announced an alliance, billed as "Enterprise Apache Hadoop," that aims to make Hadoop deployable across both private and public clouds.

Before upgrading the underlying OS, consult Red Hat's articles, for example: "How do I upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7?" and "Does Red Hat support upgrades between major versions of Red Hat Enterprise Linux?" After you confirm that you can perform the upgrade, follow the steps there.

Running HDFS and MapReduce on a single machine is great for learning about these systems, but to do useful work we need to run Hadoop on multiple nodes. There are a few options; as long as you have access to a few spare Linux machines, you are fine.

Spark can run with its standalone cluster manager, or by taking advantage of dedicated cluster management frameworks like Apache Hadoop YARN. Follow a guide on installing and configuring a three-node Hadoop cluster to set up your YARN cluster, using the packages for RedHat/Fedora/CentOS systems.

You can also set up a Hadoop cluster on Azure using virtual machines running Linux, specifically the HDP 2.1 on Linux distribution by Hortonworks (which also provides HDP distributions for the Windows platform), installed with Ambari, an Apache project.

How to install Ambari? Ambari installation is an easy task of a few commands; we will cover the Ambari installation and cluster setup. Assume four nodes, Node1 through Node4, with Node1 as the Ambari server. These are the installation steps on RHEL-based systems.
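Before installing Ambari, every node must be able to resolve every other node by name. A minimal sketch of that host mapping for Node1 through Node4; the 192.168.1.x addresses are illustrative assumptions, and on a real cluster these entries go into /etc/hosts on each node:

```shell
# Append name resolution entries for the four cluster nodes.
# HOSTS_FILE defaults to a demo path; use /etc/hosts on a real node.
HOSTS_FILE=${HOSTS_FILE:-/tmp/hosts-demo}
cat >> "$HOSTS_FILE" <<'EOF'
192.168.1.101 Node1
192.168.1.102 Node2
192.168.1.103 Node3
192.168.1.104 Node4
EOF
echo "added node entries to $HOSTS_FILE"
```

Consistent forward and reverse name resolution is one of Ambari's documented prerequisites, so it is worth verifying with ping or getent before running the installer.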
The IBM Solutions Operating Environment hosts buildable Hadoop source trees optimized for Linux running on POWER. These trees support Red Hat Enterprise Linux (RHEL) v6.5 or later on big-endian with PowerVM and Ubuntu v14.04 on little-endian with PowerKVM; they also work with openSUSE. Multi-node setups of this kind have been reported with Hadoop 2.7.5 on all nodes running CentOS 6.6, and the single-node tutorial at http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/ applies to Ubuntu 14.04 or later as well as other Linux flavors like CentOS and Redhat.

To set up a sandbox VM:
1. Download VMware Player or Oracle VirtualBox.
2. Download a CentOS 7 ISO image or any other distro based on RHEL.
3. Install the VM software.
4. Install the ISO image.
5. Launch the installed VM.
6. Open a terminal.
7. Switch to the root user: sudo su -
8. Update the system: sudo yum update

Then prepare the Hadoop user and SSH access (shown here with Ubuntu package names):

su -
apt-get install openjdk-7-jdk
sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser
sudo adduser hduser sudo
sudo reboot
// -> Login with: hduser
ssh-keygen -t rsa -P ''
// -> Enter
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh localhost
// -> Yes
// -> Exit

(A follow-up article covers how to install Apache Hadoop in pseudo-distributed mode on a single node.) The standalone Hadoop environment only needs a working Java, for example:

java -version
java version "1.6.0_20"
OpenJDK Runtime Environment (IcedTea6 1.9.7) (rhel-1.39.1.9.7.el6-x86_64)
OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)

Today we are going to get our hands dirty and install a single-instance standalone Hadoop cluster on the Oracle Compute Cloud. Oracle does not provide Hadoop as part of its yum repository, and Apache does not create one for Oracle Linux or RedHat, so we are going to download it directly. This series will get you up to speed on Big Data and Hadoop.
Topics include how to install, configure, and manage single- and multi-node Hadoop clusters, configure and manage HDFS, write MapReduce jobs, and work with many of the ecosystem projects.

Hadoop 3.x is the latest release line of Hadoop and is still in alpha phase; developers who are interested in Hadoop can install it and report any issues or bugs to Apache. There are many new features introduced in Hadoop 3.x.

Some useful checks before installing:
- Check the flavour of Linux: cat /etc/*-release
- Check whether the postgresql package exists: yum search postgres

Installation steps:
1. Log in as root to install the Hadoop cluster.
2. Check the release and installed packages: cat /etc/redhat-release

Welcome. First things first: the best resource for making single-node and multi-node clusters is Michael G. Noll's excellent two-part series: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/.

Now let's start with Hive. Running Hive: Hive uses Hadoop, so you must have Hadoop in your path, or export HADOOP_HOME=<hadoop-install-dir>. Apache Hive is considered the de facto standard for interactive SQL queries over petabytes of data in Hadoop. Hadoop was built to organize and store massive amounts of data of all shapes, sizes, and formats; because of Hadoop's "schema on read" architecture, a Hadoop cluster is a perfect reservoir of heterogeneous data.

Cloudera CDH and Cloudera Manager are great tools, but to deploy and manage Kafka and Storm with Hadoop using Apache Ambari, HDP 2.2 was selected for the project. On RHEL 6, which is yum-based, a handful of prerequisite packages need to be installed first, using Yum repositories on RHEL, CentOS, and Oracle Linux.

Finally, verify the installation:

hduser@ipc:/usr/local/hadoop/sbin$ hadoop version
Hadoop 2.4.1
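The Hive requirement above (Hadoop on the PATH, or HADOOP_HOME exported) can be sketched as follows; the /usr/local/hadoop path is an assumption matching the install prefix used elsewhere in this article:

```shell
# Point Hive at an existing Hadoop install; the directory is illustrative.
export HADOOP_HOME=${HADOOP_HOME:-/usr/local/hadoop}
export PATH="$PATH:$HADOOP_HOME/bin"
# Hive resolves the hadoop launcher through HADOOP_HOME or the PATH.
echo "HADOOP_HOME=$HADOOP_HOME"
```

If neither is set, Hive's startup script fails immediately, so this is worth putting in the hduser's shell profile.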
Subversion http://svn.apache.org/repos/asf/hadoop/common -r 1604318
Compiled by jenkins.

It should work. Remember that you need Java installed and its path set in conf/hadoop-env.sh (this is part of installing Hadoop).

In this post we are going to install a single-node cluster of Hadoop for pseudo-distributed operation, using YARN for MapReduce:

# wget http://apache.cs.utah.edu/hadoop/common/stable/hadoop-2.8.2.tar.gz
# tar -xzvf hadoop-2.8.2.tar.gz
# cd hadoop-2.8.2/

We then need to make some additions to our configuration.

When using RHadoop, hdfs.init() can fail with errors such as:

14/12/09 05:20:49 ERROR security.UserGroupInformation: Unable to find JAAS classes: com.sun.security.auth.UnixPrincipal not found in gnu.gcj.runtime.SystemClassLoader{urls=[file:/home/cloudera/R/x86_64-redhat-linux-gnu-library/3.1/rJava/java/boot/], parent=gnu.gcj.runtime...

A Cassandra installation and configuration lesson is available as an in-depth online tutorial in the Apache Cassandra course; Cassandra can run in a virtual machine on Red Hat Enterprise Linux (referred to as RHEL), CentOS (a free version of RHEL), and Debian systems, and you can also choose to install Cassandra on Windows 7 or 8.

Related stacks have their own requirements: Myrna on the Intel Distribution of Apache Hadoop (IDH) runs on Red Hat Enterprise Linux 6.1 or CentOS 6.3, and its documentation is aimed at software developers and technologists installing the Myrna software (in that document, IDH means the Intel Distribution of Apache Hadoop and HDFS the Hadoop Distributed File System). Elasticsearch for Apache Hadoop is documented alongside Elasticsearch, Kibana, Logstash, Beats, X-Pack, Elastic Cloud, and the language clients; the Elasticsearch RPM can be used to install it on any RPM-based system such as openSUSE, SLES, CentOS, Red Hat, and Oracle Enterprise Linux.
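For a pseudo-distributed node, the configuration additions typically include setting dfs.replication to 1 in hdfs-site.xml, since there is only one DataNode to hold block replicas. A minimal sketch; the CONF_DIR path is an assumption, as on a real Hadoop 2.x install the file lives in $HADOOP_HOME/etc/hadoop:

```shell
# Write a minimal hdfs-site.xml for single-node operation.
# CONF_DIR is a demo assumption; adjust to your actual configuration directory.
CONF_DIR=${CONF_DIR:-/tmp/hadoop-conf-demo}
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/hdfs-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF
echo "wrote $CONF_DIR/hdfs-site.xml"
```

Leaving the default replication factor of 3 on a single node would make HDFS report every block as under-replicated.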
If the Hadoop distribution you want to use requires a 64-bit CentOS 6.x or Red Hat Enterprise Linux (RHEL) 6.x operating system, you must use a 64-bit CentOS 6.x machine. A web server such as Apache or Lighttpd can serve a local package repository; mirror the remote yum repository to the local machine by running the mirror_cmds for your distribution packages.