Sunday, 08 June 2008


Thursday, 05 June 2008

Transaction Processing System, Part II
Definition: A Transaction Processing System (TPS) is a type of information system that collects, stores, modifies and retrieves the data transactions of an enterprise. A transaction is any event that passes the ACID test in which data is generated or modified before storage in an information system.

Features of Transaction Processing Systems

The success of commercial enterprises depends on the reliable processing of transactions to ensure that customer orders are met on time, and that partners and suppliers are paid and can make payment. The field of transaction processing has therefore become a vital part of effective business management, led by such organisations as the Association for Work Process Improvement and the Transaction Processing Performance Council. Transaction processing systems offer enterprises the means to rapidly process transactions to ensure the smooth flow of data and the progression of processes throughout the enterprise. Typically, a TPS will exhibit the following characteristics:

Rapid Processing
The rapid processing of transactions is vital to the success of any enterprise – now more than ever, in the face of advancing technology and customer demand for immediate action. TPS systems are designed to process transactions virtually instantly, to ensure that customer data is available to the processes that require it.

Reliability
Similarly, customers will not tolerate mistakes. TPS systems must be designed to ensure not only that transactions never slip past the net, but that the systems themselves remain operational permanently. TPS systems are therefore designed to incorporate comprehensive safeguards and disaster recovery systems. These measures keep the failure rate well within tolerance levels.

Standardisation
Transactions must be processed in the same way each time to maximise efficiency.
To ensure this, TPS interfaces are designed to acquire identical data for each transaction, regardless of the customer.

Controlled Access
Since TPS systems can be such a powerful business tool, access must be restricted to only those employees who require their use. Restricted access to the system ensures that employees who lack the skills and ability to control it cannot influence the transaction process.

Transaction Processing Qualifiers
In order to qualify as a TPS, transactions made by the system must pass the ACID test. The ACID test refers to the following four prerequisites:

Atomicity
Atomicity means that a transaction is either completed in full or not at all. For example, if funds are transferred from one account to another, this only counts as a bona fide transaction if both the withdrawal and the deposit take place. If one account is debited and the other is not credited, it does not qualify as a transaction. TPS systems ensure that transactions take place in their entirety.

Consistency
TPS systems exist within a set of operating rules (or integrity constraints). If an integrity constraint states that all transactions in a database must have a positive value, any transaction with a negative value would be refused.

Isolation
Transactions must appear to take place in isolation. For example, when a fund transfer is made between two accounts, the debiting of one and the crediting of the other must appear to take place simultaneously. The funds cannot be credited to an account before they are debited from another.

Durability
Once transactions are completed they cannot be undone. To ensure that this is the case even if the TPS suffers a failure, a log is created to document all completed transactions.
These four conditions ensure that TPS systems carry out their transactions in a methodical, standardised and reliable manner.

Types of Transactions

While the transaction process must be standardised to maximise efficiency, every enterprise requires a tailored transaction process that aligns with its business strategies and processes. For this reason, there are two broad types of transaction:

Batch Processing
Batch processing is a resource-saving transaction type that stores data for processing at pre-defined times. Batch processing is useful for enterprises that need to process large amounts of data using limited resources. An example of batch processing is credit card transactions, which are processed monthly rather than in real time. Credit card transactions need only be processed once a month in order to produce a statement for the customer, so batch processing saves IT resources from having to process each transaction individually.

Real-Time Processing
In many circumstances the primary factor is speed. For example, when a bank customer withdraws a sum of money from his or her account, it is vital that the transaction be processed and the account balance updated as soon as possible, allowing both the bank and the customer to keep track of funds.

Sources: Further information regarding transaction processing systems can be found at the University of Illinois and Johns Hopkins University, and at http://www.bestpricecomputers.co.uk/glossary/transaction-processing-systems.html
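The atomicity and consistency requirements described above can be illustrated with a minimal sketch using Python's built-in sqlite3 module. The account names and the no-negative-balance rule are invented for illustration; real TPS software adds locking, journaling and recovery on top of this idea.

```python
import sqlite3

# Two accounts; a transfer must be atomic: both the debit and the
# credit happen, or neither does.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move `amount` from src to dst, rolling back if any step fails."""
    try:
        with conn:  # transaction scope: commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            # Consistency rule (invented for this sketch): no negative balances.
            (balance,) = conn.execute(
                "SELECT balance FROM accounts WHERE name = ?", (src,)).fetchone()
            if balance < 0:
                raise ValueError("insufficient funds")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
        return True
    except ValueError:
        return False

transfer(conn, "alice", "bob", 30)   # succeeds: both updates committed
transfer(conn, "alice", "bob", 500)  # fails: the debit is rolled back
print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'alice': 70, 'bob': 80}
```

Note that the failed transfer leaves no trace: the debit that had already been executed is undone when the transaction is rolled back, which is exactly the atomicity property.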


Transaction Processing System
A Transaction Processing System (TPS) is a type of information system. TPSs collect, store, modify, and retrieve the transactions of an organization. A transaction is an event that generates or modifies data that is eventually stored in an information system. To be considered a transaction processing system, the computer must pass the ACID test.

From a technical perspective, a Transaction Processing System (or Transaction Processing Monitor) monitors transaction programs, a special kind of program. The essence of a transaction program is that it manages data that must be left in a consistent state. For example, if an electronic payment is made, the amount must either be both withdrawn from one account and added to the other, or neither at all. In case of a failure preventing transaction completion, the partially executed transaction must be 'rolled back' by the TPS. While this type of integrity must also be provided for batch transaction processing, it is particularly important for online processing: if, for example, an airline seat reservation system is accessed by multiple operators, then after an empty-seat inquiry the seat reservation data must be locked until the reservation is made; otherwise another user may get the impression that a seat is still free while it is actually being booked. Without proper transaction monitoring, double bookings may occur.

Other transaction monitor functions include deadlock detection and resolution (deadlocks may be inevitable in certain cases of cross-dependence on data), and transaction logging (in 'journals') for 'forward recovery' in case of massive failures.

Transaction processing is not limited to application programs. The 'journaled file system' provided with IBM's AIX Unix operating system employs similar techniques to maintain file system integrity, including a journal.

Types of Transaction Processing Systems

Contrasted with batch processing
Batch processing is not transaction processing.
Batch processing involves processing several transactions at the same time, and the results of each transaction are not immediately available when the transaction is being entered.

Features of Transaction Processing Systems

Rapid Response
Fast performance with a rapid response time is critical. Businesses cannot afford to have customers waiting for a TPS to respond; the turnaround time from the input of the transaction to the production of the output must be a few seconds or less.

Reliability
Many organizations rely heavily on their TPS; a breakdown will disrupt operations or even stop the business. For a TPS to be effective, its failure rate must be very low. If a TPS does fail, then quick and accurate recovery must be possible. This makes well-designed backup and recovery procedures essential.

Inflexibility
A TPS wants every transaction to be processed in the same way regardless of the user, the customer or the time of day. If a TPS were flexible, there would be too many opportunities for non-standard operations. For example, a commercial airline needs to consistently accept airline reservations from a range of travel agents; accepting different transaction data from different travel agents would be a problem.

Controlled Processing
The processing in a TPS must support an organization's operations. For example, if an organization allocates roles and responsibilities to particular employees, then the TPS should enforce and maintain this requirement.

The ACID Test: Definitions

Atomicity
A transaction's changes to the state are atomic: either all happen or none happen. These changes include database changes, messages, and actions on transducers.

Consistency
A transaction is a correct transformation of the state. The actions taken as a group do not violate any of the integrity constraints associated with the state.
This requires that the transaction be a correct program!

Isolation
Even though transactions execute concurrently, it appears to each transaction T that the others executed either before T or after T, but not both.

Durability
Once a transaction completes successfully (commits), its changes to the state survive failures.

Storing and Retrieving

Storing and retrieving information from a TPS must be efficient and effective. The data are stored in warehouses or other databases, and the system must be well designed for its backup and recovery procedures.

Databases and files
The storage and retrieval of data must be accurate, as the data are used many times throughout the day. A database is a neatly organized collection of data that stores an organization's accounting and operational records. Databases are protective of their delicate data, so they usually offer only a restricted view of certain data. Databases are designed using hierarchical, network or relational structures; each structure is effective in its own sense.

Hierarchical structure: organizes data in a series of levels, hence the name. Its top-to-bottom structure consists of nodes and branches; each child node has branches and is linked to only one higher-level parent node.

Network structure: similar to the hierarchical structure, network structures also organize data using nodes and branches. But, unlike hierarchical, each child node can be linked to multiple, higher-level parent nodes.

Relational structure: unlike network and hierarchical structures, a relational database organizes its data in a series of related tables. This gives flexibility, as relationships between the tables can be built.

The following features are included in real-time transaction processing systems:

Good data placement: the database should be designed to suit the access patterns of data from many simultaneous users.

Short transactions: short transactions enable quick processing.
This avoids concurrency problems and paces the system.

Real-time backup: backups should be scheduled during periods of low activity to prevent lag on the server.

High normalization: this lowers redundant information to increase speed and improve concurrency; it also improves backups.

Archiving of historical data: uncommonly used data are moved into other databases or backed-up tables. This keeps tables small and also improves backup times.

Good hardware configuration: hardware must be able to handle many users and provide quick response times.

In a TPS there are five different types of files, which the TPS uses to store and organize its transaction data:

Master file: contains information about an organization's business situation. Most transactions and databases are stored in the master file.

Transaction file: the collection of transaction records. It helps to update the master file and also serves as an audit trail and transaction history.

Report file: contains data that has been formatted for presentation to a user.

Work file: temporary files in the system used during processing.

Program file: contains the instructions for the processing of data.

Data Warehouse

A data warehouse is a database that collects information from different sources. Data gathered from real-time transactions can be analyzed efficiently if it is stored in a data warehouse. A data warehouse provides data that are consolidated, subject-oriented, historical and read-only:

Consolidated: data are organised with consistent naming conventions, measurements, attributes and semantics. This allows data from across the organization to be used effectively in a consistent manner.

Subject-oriented: large amounts of data are stored across an organization; some data could be irrelevant for reports and make querying the data difficult.
A data warehouse organizes only key business information from operational sources so that it is available for analysis.

Historical: real-time TPS data represent the current value at any time; stock levels are an example. If past data are kept, querying the database could return a different response. A data warehouse stores a series of snapshots of an organisation's operational data generated over a period of time.

Read-only: once data are moved into a data warehouse, they become read-only unless they were incorrect. Since the data represent a snapshot of a certain time, they must never be updated. The only operations that occur in a data warehouse are loading and querying data.

Backup Procedures

(Figure: a dataflow diagram of backup and recovery procedures.)

Since business organizations have become very dependent on TPSs, a breakdown in a TPS may interrupt the business's regular routines and thus stop its operation for a certain amount of time. In order to prevent data loss and minimize disruptions when a TPS breaks down, a well-designed backup and recovery procedure is put into use. The recovery process can rebuild the system when it goes down.

Recovery Process

A TPS may fail for many reasons, including system failure, human error, hardware failure, incorrect or invalid data, computer viruses, software application errors or natural disasters. Since it is not possible to keep a TPS from ever failing, the TPS must be able to cope with failures: it must detect and correct errors when they occur. A TPS goes through a recovery of the database to cope when the system fails; this involves the backup, the journal, checkpoints and a recovery manager:

Journal: a journal maintains an audit trail of transactions and database changes. Transaction logs and database change logs are used; a transaction log records all the essential data for each transaction, including data values, time of transaction and terminal number.
A database change log contains before and after copies of records that have been modified by transactions.

Checkpoint: a checkpoint record contains the information necessary to restart the system. Checkpoints should be taken frequently, such as several times an hour. When a failure occurs, processing can then resume from the most recent checkpoint, with only a few minutes of work needing to be repeated.

Recovery manager: a recovery manager is a program which restores the database to a correct condition from which transaction processing can restart.

Depending on how the system failed, one of two different recovery procedures can be used. Generally, the procedure involves restoring data from a backup device and then running the transaction processing again. The two types of recovery are backward recovery and forward recovery:

Backward recovery: used to undo unwanted changes to the database. It reverses the changes made by transactions which have been aborted. It involves the logic of reprocessing each transaction, which is very time-consuming.

Forward recovery: starts with a backup copy of the database. The transactions recorded in the journal between the time the backup was made and the present are then reprocessed. This is much faster and more accurate.

See also: checkpoint restart

Types of Backup Procedures

There are two main types of backup procedure: grandfather-father-son and partial backups.

Grandfather-Father-Son
This procedure keeps at least three generations of backup master files: the most recent backup is the son, and the oldest is the grandfather. It is commonly used for a batch transaction processing system with magnetic tape. If the system fails during a batch run, the master file is recreated by using the son backup and then restarting the batch. However, if the son backup fails, is corrupted or is destroyed, then the next generation up (the father) is required.
Likewise, if that fails, then the next generation up (the grandfather) is required. Of course, the older the generation, the more the data may be out of date. Organizations can have up to twenty generations of backup.

Partial Backups
In a partial backup, only parts of the master file are backed up. The master file is usually backed up to magnetic tape at regular intervals, which could be daily, weekly or monthly. Transactions completed since the last backup are stored separately in files called journals, or journal files. If the system fails, the master file can be recreated from the journal files and the backup tape.

Updating in a Batch
This is used when transactions are recorded on paper (such as bills and invoices) or stored on magnetic tape. Transactions are collected and updated as a batch when it is convenient or economical to process them. Historically, this was more widely used, as the information technology to allow real-time processing did not exist.

The two stages in batch processing are:
1. Collecting and storing the transaction data in a transaction file; this involves sorting the data into sequential order.
2. Processing the data by updating the master file, which can be difficult; this may involve data additions, updates and deletions that must happen in a certain order. If an error occurs, the entire batch fails.

Updating in batch requires sequential access: since it uses magnetic tape, this is the only way to access the data. A batch run starts at the beginning of the tape and reads it in the order it was stored, so it is very time-consuming to locate specific transactions.

The information technology used includes a secondary storage medium which can store large quantities of data inexpensively (hence the common choice of magnetic tape). The software used to collect the data does not have to be online; it does not even need a user interface.

Updating in Real Time
This is the immediate processing of data.
It provides instant confirmation of a transaction and can involve a large number of users simultaneously performing transactions that change data. Because of advances in technology (such as increased data transmission speeds and larger bandwidth), real-time updating is now possible.

The steps in a real-time update involve sending the transaction data to an online database in a master file. The person providing the information is usually able to help with error correction and receives confirmation that the transaction is complete.

Updating in real time uses direct access to data, which occurs when data are accessed without reading through previous data items. The storage device stores data in a particular location determined by a mathematical procedure applied to the key, which yields an approximate location of the data. If the data are not found at this location, successive locations are searched until they are found.

The information technology used could be a secondary storage medium that can store large amounts of data and provide quick access (hence the common choice of magnetic disk). It requires a user-friendly interface, as rapid response time is important.

Source: http://en.wikipedia.org/wiki/Transaction_Processing_System
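The direct-access scheme described above (a mathematical procedure maps a key to a home location, and successive locations are searched if the record is not there) is essentially hashing with linear probing. The sketch below assumes that interpretation; the table size, key format and records are invented for illustration:

```python
# Hash table with linear probing: hash(key) picks a home slot, and
# successive slots are probed until the record (or an empty slot) is found.
SIZE = 8
table = [None] * SIZE  # small fixed-size store for illustration

def location(key):
    """The 'mathematical procedure': map a key to an approximate slot."""
    return hash(key) % SIZE

def store(key, record):
    slot = location(key)
    while table[slot] is not None:   # slot taken: try the next one
        slot = (slot + 1) % SIZE
    table[slot] = (key, record)

def retrieve(key):
    slot = location(key)
    while table[slot] is not None:
        if table[slot][0] == key:    # found at this (or a later) slot
            return table[slot][1]
        slot = (slot + 1) % SIZE     # not here: search successive locations
    return None                      # hit an empty slot: key is absent

store("ACC-1001", {"balance": 250})
store("ACC-1002", {"balance": 900})
print(retrieve("ACC-1002"))  # {'balance': 900}
```

A real disk-based system applies the same idea to bucket addresses on the disk rather than to list indices in memory, which is what makes a single record retrievable without scanning the file sequentially.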


The Transaction Process
Double-entry bookkeeping is the standard practice for recording financial transactions. The bookkeeping process consists only of recording transactions in various journals and assigning general ledger account classification codes (i.e., collecting raw financial data). This forms the basis for an accounting system, which collects and organizes the raw data into useful information. The system rests on the concept that a business can be described using a number of variables, or accounts, each of which describes one aspect of the business from a monetary point of view. Every transaction has a 'dual effect', which is explained below.

The history of this bookkeeping system has been traced back to the 12th century, and by the end of the 15th century it was in widespread use among the merchants of Venice. The system was first codified by Luca Pacioli, a friend of Leonardo da Vinci, in a mathematics textbook published in 1494.

The bookkeeping process
When a transaction occurs, a document is produced. This document is referred to as a source document. Some examples of source documents are the receipt you get when you buy something at a shop, and your monthly bank statement. All of these source documents are then recorded in a journal, also called a book of first entry. The journal records both sides of the transaction recorded by the source document; these write-ups are known as journal entries.

These journal entries are then transferred to a ledger, also known as a book of accounts. The purpose of a ledger is to bring together all of the amounts recorded for each account in the journal. This process of transferring the values is known as posting. Once the entries have all been posted, the ledger accounts are added up in a process called balancing, and a working document called an unadjusted trial balance is created. This lists the balances of all the accounts in the ledger.
Notice that the values are not posted to the trial balance; they are merely copied.

At this point accounting happens. The accountant produces a number of adjustments to make sure that the values comply with accounting principles. These values are then passed through the accounting system, resulting in an adjusted trial balance. This process continues until the accountant is satisfied.

Finally, financial statements are drawn from the trial balance, which may include:
the income statement
the balance sheet
the cash flow statement

Source: http://id.wikipedia.org/wiki/Pembukuan_berpasangan
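The journal-to-ledger-to-trial-balance flow described above can be sketched in a few lines. The accounts and amounts below are made up for illustration; the point is the dual effect (every entry's debits equal its credits) and the resulting trial balance summing to zero:

```python
from collections import defaultdict

# Hypothetical journal: each entry is a list of (account, debit, credit)
# lines, and each entry's total debits must equal its total credits.
journal = [
    [("Cash", 1000, 0), ("Capital", 0, 1000)],   # owner invests cash
    [("Supplies", 200, 0), ("Cash", 0, 200)],    # buy supplies for cash
    [("Cash", 500, 0), ("Sales", 0, 500)],       # cash sale
]

def post(journal):
    """Post journal entries to the ledger (running totals per account)."""
    ledger = defaultdict(lambda: [0, 0])  # account -> [debits, credits]
    for entry in journal:
        assert sum(d for _, d, _ in entry) == sum(c for *_, c in entry), \
            "unbalanced journal entry"
        for account, debit, credit in entry:
            ledger[account][0] += debit
            ledger[account][1] += credit
    return ledger

def trial_balance(ledger):
    """The unadjusted trial balance merely copies each account's balance."""
    rows = {acct: d - c for acct, (d, c) in ledger.items()}
    assert sum(rows.values()) == 0  # total debits equal total credits
    return rows

print(trial_balance(post(journal)))
# {'Cash': 1300, 'Capital': -1000, 'Supplies': 200, 'Sales': -500}
```

Here credit balances are shown as negative numbers so the whole trial balance sums to zero; a printed trial balance would instead show debits and credits in separate columns.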


Enterprise resource planning
Enterprise resource planning (ERP) systems attempt to integrate several data sources and processes of an organization into a unified system. A typical ERP system will use multiple components of computer software and hardware to achieve the integration. A key ingredient of most ERP systems is the use of a unified database to store data for the various system modules.

The two key components of an ERP system are a common database and a modular software design. A common database is the system that allows every department of a company to store and retrieve information in real time. Using a common database allows information to be more reliable, accessible, and easily shared. Furthermore, a modular software design is a variety of programs that can be added on an individual basis to improve the efficiency of the business. This improves the business by adding functionality, mixing and matching programs from different vendors, and allowing the company to choose which modules to implement. These modular software designs link into the common database, so that all of the information between the departments is accessible in real time.

Origin of the term

MRP vs. ERP: manufacturing management systems have evolved in stages over the past 30 years, from a simple means of calculating materials requirements to the automation of an entire enterprise. Around 1980, over-frequent changes in sales forecasts, entailing continual readjustments in production, as well as the unsuitability of the parameters fixed by the system, led MRP (Material Requirement Planning) to evolve into a new concept: Manufacturing Resource Planning (MRP2) and finally the generic concept Enterprise Resource Planning (ERP).

The initials ERP originated as an extension of MRP (material requirements planning, then manufacturing resource planning) and CIM (computer-integrated manufacturing), and were introduced by the research and analysis firm Gartner.
ERP systems now attempt to cover all basic functions of an enterprise, regardless of the organization's business or charter. Non-manufacturing businesses, non-profit organizations and governments now all utilize ERP systems.

To be considered an ERP system, a software package must provide the function of at least two systems. For example, a software package that provides both payroll and accounting functions could technically be considered an ERP software package. However, the term is typically reserved for larger, more broadly based applications. The introduction of an ERP system to replace two or more independent applications eliminates the need for the external interfaces previously required between systems, and provides additional benefits that range from standardization and lower maintenance (one system instead of two or more) to easier and/or greater reporting capabilities (as all data is typically kept in one database).

Examples of modules in an ERP which formerly would have been stand-alone applications include: Manufacturing, Supply Chain, Financials, Customer Relationship Management (CRM), Human Resources, Warehouse Management and Decision Support System.

Overview

Some organizations, typically those with sufficient in-house IT skills to integrate multiple software products, choose to implement only portions of an ERP system and develop an external interface to other ERP or stand-alone systems for their other application needs.
For example, one may choose to use a human resource management system from one vendor and the financial systems from another, and perform the integration between the systems oneself. This is very common in the retail sector, where even a mid-sized retailer will have a discrete Point-of-Sale (POS) product and financials application, plus a series of specialized applications to handle business requirements such as warehouse management, staff rostering, merchandising and logistics.

Ideally, ERP delivers a single database that contains all data for the software modules, which would include:

Manufacturing: Engineering, Bills of Material, Scheduling, Capacity, Workflow Management, Quality Control, Cost Management, Manufacturing Process, Manufacturing Projects, Manufacturing Flow
Supply Chain Management: Inventory, Order Entry, Purchasing, Product Configurator, Supply Chain Planning, Supplier Scheduling, Inspection of Goods, Claim Processing, Commission Calculation
Financials: General Ledger, Cash Management, Accounts Payable, Accounts Receivable, Fixed Assets
Projects: Costing, Billing, Time and Expense, Activity Management
Human Resources: Human Resources, Payroll, Training, Time & Attendance, Rostering, Benefits
Customer Relationship Management: Sales and Marketing, Commissions, Service, Customer Contact and Call Center Support
Data Warehouse and various Self-Service interfaces for Customers, Suppliers, and Employees

Enterprise Resource Planning is a term originally derived from manufacturing resource planning (MRP II), which followed material requirements planning (MRP). MRP evolved into ERP when "routings" became a major part of the software architecture and a company's capacity planning activity also became part of the standard software activity. ERP systems typically handle the manufacturing, logistics, distribution, inventory, shipping, invoicing, and accounting for a company.
Enterprise Resource Planning (ERP) software can aid in the control of many business activities, such as sales, marketing, delivery, billing, production, inventory management, quality management, and human resource management.

ERP systems saw a large boost in sales in the 1990s as companies faced the Y2K problem in their legacy systems. Many companies took this opportunity to replace their legacy information systems with ERP systems. This rapid growth in sales was followed by a slump in 1999, by which time most companies had already implemented their Y2K solution.

ERPs are often incorrectly called back office systems, indicating that customers and the general public are not directly involved. This is contrasted with front office systems like customer relationship management (CRM) systems that deal directly with customers, eBusiness systems such as eCommerce, eGovernment, eTelecom, and eFinance, and supplier relationship management (SRM) systems.

ERPs are cross-functional and enterprise-wide. All functional departments that are involved in operations or production are integrated in one system. In addition to manufacturing, warehousing, logistics, and information technology, this includes accounting, human resources, marketing, and strategic management.

ERP II means an open ERP architecture of components: the older, monolithic ERP systems became component-oriented. EAS (Enterprise Application Suite) is a newer name for ERP systems which include (almost) all segments of business, using ordinary Internet browsers as thin clients.

Before

Prior to the concept of ERP systems, it was not unusual for each department within an organization to have its own customized computer system.
For example, the human resources (HR) department, the payroll department, and the financial department might all have their own computer systems. Typical difficulties involved the integration of data from potentially different computer manufacturers and systems. For example, the HR computer system (often called HRMS or HRIS) would typically manage employee information, while the payroll department would calculate and store paycheck information for each employee, and the financial department would store financial transactions for the organization. Each system would have to integrate using a predefined set of common data transferred between the computer systems. Any deviation from the data format or the integration schedule often resulted in problems.

After

ERP software, among other things, combined the data of formerly separate applications. This simplified keeping data synchronized across the enterprise, simplified the computer infrastructure within a large organization, and standardized and reduced the number of software specialties required within larger organizations.

Source: http://en.wikipedia.org/wiki/Enterprise_resource_planning
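The "common database plus modular software" idea at the heart of ERP can be illustrated with a small sketch. The module names, table layout and pay rule below are hypothetical, not those of any real ERP package; the point is that both modules read and write the same store, so no export/import interface is needed between them:

```python
import sqlite3

# One shared database stands in for the ERP 'common database'.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")

class HRModule:
    """Hypothetical HR module: maintains the master employee records."""
    def __init__(self, db):
        self.db = db
    def hire(self, emp_id, name, annual_salary):
        with self.db:
            self.db.execute("INSERT INTO employees VALUES (?, ?, ?)",
                            (emp_id, name, annual_salary))

class PayrollModule:
    """Hypothetical payroll module: reads the same table HR writes."""
    def __init__(self, db):
        self.db = db
    def monthly_pay(self, emp_id):
        (salary,) = self.db.execute(
            "SELECT salary FROM employees WHERE id = ?", (emp_id,)).fetchone()
        return round(salary / 12, 2)  # toy pay rule for illustration

hr = HRModule(db)
payroll = PayrollModule(db)
hr.hire(1, "A. Smith", 60000.0)
print(payroll.monthly_pay(1))  # 5000.0
```

In the pre-ERP arrangement described above, HR and payroll would each hold their own copy of the employee record, and a change in one system would have to be exported, reformatted and imported into the other; sharing one database removes that step.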


Enterprise Resource Planning (Perencanaan Sumber Daya Perusahaan)
Enterprise resource planning, often abbreviated ERP, is an information system intended for manufacturing and service companies that integrates and automates the business processes associated with the operational, production and distribution aspects of the company.

History of ERP
ERP developed from Manufacturing Resource Planning (MRP II), which was itself an evolution of the earlier Material Requirement Planning (MRP). ERP systems typically handle, in modular fashion, a company's manufacturing, logistics, distribution, inventory, shipping, invoicing and accounting processes. This means the system helps control business activities such as sales, delivery, production, inventory management, quality management and human resources.

Characteristics of ERP Systems
ERP is often called a Back Office System, indicating that customers and the general public are not involved in the system.
Berbeda dengan Front Office System yang langsung berurusan dengan pelanggan seperti sistem untuk e-Commerce, Customer Relationship Management (CRM), e-Government dan lain-lain.Modul ERPSecara modular, software ERP biasanya terbagi atas modul utama yakni Operasi serta modul pendukung yakni Finansial dan Akunting serta Sumber Daya Manusia-Modul OperasiGeneral Logistics, Sales and Distribution, Materials Management, Logistics Execution, Quality Management, Plant Maintenance, Customer Service, Production Planning and Control, Project System, Environment Management-Modul Finansial dan AkuntingGeneral Accounting, Financial Accounting, Controlling, Investment Management, Treasury, Enterprise Controlling,-Modul Sumber Daya ManusiaPersonnel Management, Personnel Time Management, Payroll, Training and Event Management, Organizational Management, Travel Management,Keuntungan penggunaan ERP-Integrasi data keuanganUntuk mengintegrasikan data keuangan sehingga top management bisa melihat dan mengontrol kinerja keuangan perusahaan dengan lebih baik-Standarisasi Proses OperasiMenstandarkan proses operasi melalui implementasi best practice sehingga terjadi peningkatan produktivitas, penurunan inefisiensi dan peningkatan kualitas produk-Standarisasi Data dan InformasiMenstandarkan data dan informasi melalui keseragaman pelaporan, terutama untuk perusahaan besar yang biasanya terdiri dari banyak business unit dengan jumlah dan jenis bisnis yg berbeda-beda-Keuntungan yg bisa diukurPenurunan inventoriPenurunan tenaga kerja secara totalPeningkatan service levelPeningkatan kontrol keuanganPenurunan waktu yang di butuhkan untuk mendapatkan informasiMemilih ERP-Latar BelakangInvestasi ERP sangat mahal dan pilihan ERP yang salah bisa menjadi mimpi burukERP yang berhasil digunakan oleh sebuah perusahaan tidak menjadi jaminan berhasil di perusahaan yang lain -Perencanaan harus dilakukan untuk menyeleksi ERP yg tepatBahkan dalam beberapa kasus yang ekstrim, evaluasi pilihan ERP menghasilkan 
rekomendasi untuk tidak membeli ERP, tetapi memperbaiki Business Process yang adaTidak ada ‘keajaiban’ dalam ERP software. Keuntungan yang didapat dari ERP adalah hasil dari persiapan dan implementasi yang efektifTidak ada software atau sistem informasi yang bisa menutupi business strategy yang cacat dan business process yang ‘parah’-Secara singkat, tidak semua ERP sama kemampuannya dan memilih ERP tidaklah mudah (paling tidak, tidaklah sederhana), dan memilih ERP yang salah akan menjadi bencana yang mahal3 syarat sukses ERP-Knowledge-Experience-Knowledge & ExperienceKnowledge adalah pengetahuan tentang bagaimana cara sebuah proses seharusnya dilakukan, jika segala sesuatunya berjalan lancarExperience adalah pemahaman terhadap kenyataan tentang bagaimana sebuah proses seharusnya dikerjakan dengan kemungkinan munculnya permasalahanKnowledge tanpa experience menyebabkan orang membuat perencanaan yang terlihat sempurna tetapi kemudian terbukti tidak bisa diimplementasikanExperience tanpa knowledge bisa menyebabkan terulangnya atau terakumulasinya kesalahan dan kekeliruan karena tidak dibekali dengan pemahaman yg cukupSelection MethodologyMetodologiAda struktur proses seleksi yang sebaiknya dilakukan untuk memenuhi kebutuhan perusahaan dalam memilih ERPProses seleksi tidak harus selalu rumit agar efektif. Yang penting organized, focused dan simpleProses seleksi ini biasanya berkisar antara 5-6 bulan sejak dimulai hingga penandatanganan order pembelian ERP(BK. 
Khaitan, weblink)Berikut ini adalah akivitas yg sebaiknya dilakukan sebagai bagian dari proses pemilihan software ERP: analisa strategi bisnis, analisa sumber daya manusia, analisa infrastruktur dan analisa softwareAnalisa Business StrategyBagaimana level kompetisi di pasar dan apa harapan dari customers?Adakah keuntungan kompetitif yang ingin dicapai?Apa strategi bisnis perusahaan dan objectives yang ingin dicapai?Bagaimana proses bisnis yang sekarang berjalan vs proses bisnis yang diinginkan?Adakah proses bisnis yang harus diperbaiki?Apa dan bagaimana prioritas bisnis yang ada dan adakah rencana kerja yang disusun untuk mencapai objektif dan prioritas tersebut?Target bisnis seperti apa yang harus dicapai dan kapan?Analisa PeopleBagaimana komitment top management thd usaha untuk implementasi ERP?Siapa yg akan mengimplementasikan ERP dan siapa yg akan menggunakannya?Bagaimana komitmen dari tim implementasi?Apa yg diharapkan para calon user thd ERP?Adakah ERP champion yg menghubungkan top management dgn tim?Adakah konsultan dari luar yg disiapkan untuk membantu proses persiapan?Analisa InfrastrukturBagaimanakah kelengkapan infrastruktur yang sudah ada (overall networks, permanent office systems, communication system dan auxiliary system)Seberapa besar budget untuk infrastruktur?Apa infrastruktur yang harus disiapkan?Analisa SoftwareApakah software tsb cukup fleksibel dan mudah disesuaikan dengan kondisi perusahaan?Apakah ada dukungan service dari supplier, tidak hanya secara teknis tapi juga untuk kebutuhan pengembangan sistem di kemudian hariSeberapa banyak waktu untuk implementasi yg tersediaApakah software memiliki fungsi yang bisa meningkatkan proses bisnis perusahaan Sumber : http://id.wikipedia.org/wiki/Perencanaan_sumber_daya_perusahaan
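The four analyses above (business strategy, people, infrastructure, software) are often combined into a weighted scoring matrix when comparing candidate packages. The sketch below illustrates only the mechanics; the package names, weights and scores are all hypothetical, not taken from any real evaluation.

```python
# Hypothetical weighted-score comparison of ERP candidates across the
# four analysis dimensions discussed above. All numbers are illustrative.

WEIGHTS = {"business_fit": 0.35, "people_readiness": 0.20,
           "infrastructure": 0.15, "software_capability": 0.30}

candidates = {
    "Package A": {"business_fit": 8, "people_readiness": 6,
                  "infrastructure": 7, "software_capability": 9},
    "Package B": {"business_fit": 7, "people_readiness": 8,
                  "infrastructure": 8, "software_capability": 6},
}

def weighted_score(scores, weights=WEIGHTS):
    """Weighted sum of per-dimension scores (0-10 scale)."""
    return sum(weights[k] * scores[k] for k in weights)

# Rank candidates from best to worst by weighted score.
for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```

The point of the matrix is not the arithmetic but the discipline: it forces the selection team to agree on how much each dimension matters before looking at vendors.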


Definition of Artificial Intelligence
Artificial intelligence is a branch of computer science that studies how to equip a computer with human-like abilities or intelligence: for example, how a computer can learn on its own from experience and the data it has collected, or how a computer can communicate and pronounce words. With these abilities, the computer is expected to be able to make its own decisions in the various cases it encounters. The AI industry has been growing since the 1980s, although the work began in the 1970s.

The evolution of AI has run along two different tracks. The first seeks to create computer systems that imitate the human thinking process in order to solve general problems, for example chess-playing programs. The second combines the best thinking of experts in a piece of software designed to solve a specific problem, usually called an expert system. Consider, for example, how a doctor diagnoses a patient's illness: question and answer, then examination of the body's condition such as the eyes, blood pressure, body temperature and so on. These are the steps one tries to transfer to a computer so that it can think like that expert.

An AI program is a computer program that attempts to achieve the goals above. AI programming generally uses special programming languages such as Prolog, Lisp and so on. The reason special languages are used is that there is a fundamental difference between AI programming and conventional programming. What is the difference? AI programming (particularly Prolog) is declarative, whereas conventional programming is procedural. In a procedural language, the programmer tells the computer how to do something: the computer must be given the steps (the algorithm) for solving a problem. In a declarative language, by contrast, the programmer tells the computer what is to be done. In declarative programming the computer is only given the data/facts and the rules that apply to the problem, without being told how the problem should be solved; the task of finding a way to solve the problem is left to the computer. (It is on this principle that a computer can really be called 'intelligent'. This is also the 'key' answer for students taking the AI course with me, so keep following this blog, he he, a little self-promotion.)

Most programmers know procedural languages such as C/C++ better than the AI languages mentioned above. On the other hand, the AI programming paradigm looks promising enough to become a programming paradigm of the future. Can part of the AI programming paradigm be implemented using a procedural language? Two pressing facts make this question worth answering. First, most programmers have minimal background in AI. Second, most AI enthusiasts and experts use AI-specific languages to implement AI techniques, while today's programs, generally written in procedural languages, also need AI techniques to improve their performance. Can a middle ground be found between these two facts (like uniting two worlds)?

The first approach: program modules written in Turbo Prolog can in fact be interfaced with modules written in, say, C/C++, but this is not the best solution. Using two separate programming languages requires extra management and coordination, because connecting two different systems requires detailed knowledge of both. It also means that general programmers would still have to master the AI language thoroughly.

The second approach: the author has an alternative idea that he considers better, namely to study how AI programming techniques can be implemented using a general-purpose/procedural language such as C, C++ or Java. Is that possible? Several questions must be answered. First, to what extent can the AI programming paradigm (particularly its declarative nature) be implemented using a non-declarative language such as C++ or Java? Second, can that implementation be packaged as a special library to serve as a tool for developing AI programs later on? Third, the effectiveness of such a library must be tested: how much does it ease the design of AI programs, and what is the performance of the programs it produces? The author is also studying the possibility of an 'AI library' (Intelligent Package) for Java, although this last item is still at the stage of surveying the facilities required. It is worth sharing...

Sources:
http://www.total.or.id/info.php?kk=artificial%20Intelligence
http://herianto.unsada.ac.id/?p=39
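The declarative style described above, where the program states facts and rules and the machine finds the solution, can indeed be sketched in a procedural language. The mini rule engine below is a hypothetical illustration written for this post (not the author's library): facts and rules are plain data, and a generic forward-chaining loop derives new facts, so the "how" lives in the engine rather than in the problem description.

```python
# Minimal forward-chaining rule engine: a declarative sketch in Python.
# Facts and rules are data ("what"); the engine supplies the "how".

facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

# Each rule: if all premises (templates over "?"-variables) hold, add the conclusion.
rules = [
    # grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    ([("parent", "?x", "?y"), ("parent", "?y", "?z")],
     ("grandparent", "?x", "?z")),
]

def substitute(template, bindings):
    """Replace variables in a template with their bound values."""
    return tuple(bindings.get(t, t) for t in template)

def match(template, fact, bindings):
    """Try to unify a template with a fact; return extended bindings or None."""
    if len(template) != len(fact):
        return None
    b = dict(bindings)
    for t, f in zip(template, fact):
        if t.startswith("?"):
            if b.get(t, f) != f:
                return None
            b[t] = f
        elif t != f:
            return None
    return b

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Collect every variable binding that satisfies all premises.
            bindings_list = [{}]
            for premise in premises:
                bindings_list = [b2 for b in bindings_list
                                 for fact in derived
                                 if (b2 := match(premise, fact, b)) is not None]
            for b in bindings_list:
                new_fact = substitute(conclusion, b)
                if new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

all_facts = forward_chain(facts, rules)
print(("grandparent", "tom", "ann") in all_facts)  # True
```

Note that the grandparent rule never says how to search; the engine discovers ("grandparent", "tom", "ann") on its own, which is exactly the division of labour Prolog provides natively.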


Artificial Intelligence Part II
Problems of AI
While there is no universally accepted definition of intelligence, AI researchers have studied several traits that are considered essential.

Deduction, reasoning, problem solving
Early AI researchers developed algorithms that imitated the process of conscious, step-by-step reasoning that human beings use when they solve puzzles, play board games, or make logical deductions. By the late 1980s and 1990s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics. For difficult problems, most of these algorithms can require enormous computational resources — most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research. It is not clear, however, that conscious human reasoning is any more efficient when faced with a difficult abstract problem. Cognitive scientists have demonstrated that human beings solve most of their problems using unconscious reasoning, rather than the conscious, step-by-step deduction that early AI research was able to model. Embodied cognitive science argues that unconscious sensorimotor skills are essential to our problem-solving abilities. It is hoped that sub-symbolic methods, like computational intelligence and situated AI, will be able to model these instinctive skills. The problem of unconscious problem solving, which forms part of our commonsense reasoning, is largely unsolved.

Knowledge representation
Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world.
Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A complete representation of "what exists" is an ontology (borrowing a word from traditional philosophy), of which the most general are called upper ontologies. Among the most difficult problems in knowledge representation are:

Default reasoning and the qualification problem: Many of the things people know take the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture an animal that is fist-sized, sings, and flies. None of these things are true of birds in general. John McCarthy identified this problem in 1969 as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.

Unconscious knowledge: Much of what people know isn't represented as "facts" or "statements" that they could actually say out loud. It takes the form of intuitions or tendencies and is represented in the brain unconsciously and sub-symbolically. This unconscious knowledge informs, supports and provides a context for our conscious knowledge. As with the related problem of unconscious reasoning, it is hoped that situated AI or computational intelligence will provide ways to represent this kind of knowledge.

The breadth of commonsense knowledge: The number of atomic facts that the average person knows is astronomical.
Research projects that attempt to build a complete knowledge base of commonsense knowledge, such as Cyc, require enormous amounts of tedious step-by-step ontological engineering — they must be built, by hand, one complicated concept at a time.

Planning
Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future: they must have a representation of the state of the world and be able to make predictions about how their actions will change it. They must also be able to determine the utility or "value" of the choices available to them. In some planning problems, the agent can assume that it is the only thing acting on the world, and it can be certain what the consequences of its actions will be. However, if this is not true, it must periodically check whether the world matches its predictions and change its plan as necessary, which requires the agent to reason under uncertainty. Multi-agent planning tries to determine the best plan for a community of agents, using cooperation and competition to achieve a given goal. Emergent behavior such as this is used by both evolutionary algorithms and swarm intelligence.

Learning
Important machine learning problems are:
-Unsupervised learning: find a model that matches a stream of input "experiences", and be able to predict what new "experiences" to expect.
-Supervised learning, such as classification (being able to determine what category something belongs in, after seeing a number of examples of things from each category) or regression (given a set of numerical input/output examples, discover a continuous function that would generate the outputs from the inputs).
-Reinforcement learning: the agent is rewarded for good responses and punished for bad ones. (These can be analyzed in terms of decision theory, using concepts like utility.)

Natural language processing
Natural language processing gives machines the ability to read and understand the languages human beings speak.
Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.

Motion and manipulation
ASIMO uses sensors and intelligent algorithms to avoid obstacles and navigate stairs. The field of robotics is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation and navigation, with sub-problems of localization (knowing where you are), mapping (learning what is around you) and motion planning (figuring out how to get there).

Perception
Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar and other more exotic ones) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected subproblems are speech recognition, facial recognition and object recognition.

Social intelligence
Kismet is a robot with rudimentary social skills. Emotion and social skills play two roles for an intelligent agent:
-It must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory and decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.)
-For good human-computer interaction, an intelligent machine also needs to display emotions — at the very least it must appear polite and sensitive to the humans it interacts with. At best, it should appear to have normal emotions itself.
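Several of the sections above (planning, reinforcement learning, social intelligence) appeal to decision theory's notion of utility. The core idea is small enough to sketch: an agent picks the action whose probability-weighted payoff is highest. The scenario, probabilities and utilities below are entirely made up for illustration.

```python
# Expected-utility decision making: choose the action whose
# probability-weighted payoff is highest. All numbers are illustrative.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """Return the name of the action with the highest expected utility."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

actions = {
    # action: [(probability, utility), ...] over rain / no-rain outcomes
    "take_umbrella": [(0.3, 8), (0.7, 6)],
    "leave_umbrella": [(0.3, -10), (0.7, 10)],
}

print(best_action(actions))  # take_umbrella
```

Here the umbrella wins (expected utility 6.6 versus 4.0) even though leaving it home has the single best outcome: decision theory trades off best cases against risk.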
Creativity
A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative).

General intelligence
Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them. A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project. Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what it's talking about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.

Approaches to AI
There are as many approaches to AI as there are AI researchers — any coarse categorization is likely to be unfair to someone. Artificial intelligence communities have grown up around particular problems, institutions and researchers, as well as the theoretical insights that define the approaches described below. Artificial intelligence is a young science and is still a fragmented collection of subfields. At present, there is no established unifying theory that links the subfields into a coherent whole.

Cybernetics and brain simulation
In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast.
Many of these researchers gathered for meetings of the Teleological Society at Princeton and the Ratio Club in England.

Traditional symbolic AI
When access to digital computers became possible in the mid-1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: CMU, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI "good old fashioned AI" or "GOFAI".
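Symbolic AI programs of this kind typically search through a space of possible symbol manipulations, which runs directly into the "combinatorial explosion" described under problem solving above. The growth is easy to quantify; the branching factor and depths below are illustrative numbers only.

```python
# Illustrating combinatorial explosion: node counts in a uniform search tree.

def nodes_at_depth(branching_factor, depth):
    """Number of distinct nodes at a given depth of a uniform search tree."""
    return branching_factor ** depth

def total_nodes(branching_factor, depth):
    """Total nodes visited by a brute-force search down to the given depth."""
    return sum(branching_factor ** d for d in range(depth + 1))

# With a modest branching factor of 10, each extra level of lookahead
# multiplies the work roughly tenfold:
for depth in (2, 4, 8):
    print(depth, total_nodes(10, depth))
```

At depth 8 the search already visits over a hundred million nodes, which is why efficient search heuristics remain a high priority for AI research.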