AW Software Products ExcitingAds :: Exciting Ads

"Seriendruck mit Microsoft Word"
"Messenger hin, E-Mail her: Serienbriefe sind auch in Zeiten von Social Media nach wie vor top aktuell. Egal ob im Verein oder im kleinen Unternehmen: Immer dann wenn es darum geht eine grere Empfngerzahl effizient und dennoch mit einer persnlichen Note anzusprechen schlgt die Stunde von Serienbrief & Co.Doch viele Ersteller von Serienbriefen verschenken eine ganze Menge des wirklichen Potenzials, das in der Seriendruck-Funktion von Microsoft Word tatschlich steckt.Denn mit Hilfe von Regeln und Feldern knnen beispielsweise: unterschiedliche Texte im Dokument erscheinen, je nachdem ob eine bestimmte Bedingung erfllt ist oder nicht,oder tagesaktuelle Angaben, die erst unmittelbar vor dem Seriendruck abgefragt werden.Es soll nur ein Teil der in der verwendeten Liste enthalten Adressaten verwendet werden? Kein Problem: Die passende Regel berspringt alle Eintrge, die nicht dem gewnschten Kriterium entsprechen.Auch bei den Dokumenttypen, in die die Daten letztlich eingefgt werden, ist Vielfalt angesagt: Neben einem Brief kann dann ganz klassisch auch ein Satz Umschlge oder Etiketten beim Seriendruck herauskommen, aber eben auch ein Verzeichnis oder gar eine personalisierte Serien-E-Mail.Je nach Umfang der verwendeten Adressliste stellt sich auch die Frage, welches Dateiformat als Datenquelle verwendet werden soll: Reicht eine einfache Excel-Tabelle? Oder braucht es doch eher Access? Kann ich auch ganz ohne zustzliche Programme den Seriendruck nutzen?Genau diese und viele weitere Fragen beantwortet Dir dieser Kurs!Das Beste dabei ist: All das lsst sich ohne jede technischen Vorkenntnisse nutzen! Ganz normale Kenntnisse im Umgang mit Word reichen dazu vllig aus!Schritt fr Schritt wirst Du Dich vom ersten einfachen Serienbrief hinentwickeln zu einem absoluten Seriendruck-Spezialisten, den selbst die Verschachtelung mehrerer Bedingungen genauso wenig in Verlegenheit bringt wie beispielsweise die Verwendung des Outlook-Adressbuchs als Datenquelle.Anhand der mitgelieferten Begleitmaterialien kannst Du die Inhalte der einzelnen Lektionen dabei jederzeit auf dem eigenen Rechner nachvollziehen und vertiefen.'Seriendruck mit Microsoft Word 2019' is an independent seminar and is neither affiliated with, nor authorized, sponsored or approved by Microsoft Corporation!"
Price: 29.99

"Apache Hive Interview Question and Answer (100+ FAQ)"
"Apache Hive Interview Questions has a collection of 100+ questions with answers asked in the interview for freshers and experienced (Programming, Scenario-Based, Fundamentals, Performance Tuning based Question and Answer). This course is intended to help Apache Hive Career Aspirants to prepare for the interview.We are planning to add more questions in upcoming versions of this course. The Apache Hive data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive.Course Consist of the Interview Question on the following TopicsHive TutorialHive SQL Language Manual: Commands, CLIs, Data Types,DDL (create/drop/alter/truncate/show/describe), Statistics (analyze), Indexes, Archiving,DML (load/insert/update/delete/merge, import/export, explain plan),Queries (select), Operators and UDFs, Locks, AuthorizationFile Formats and Compression: RCFile, Avro, ORC, Parquet; Compression, LZOProcedural Language: Hive HPL/SQLHive Configuration PropertiesHive ClientsHive Client (JDBC, ODBC, Thrift)HiveServer2: Overview, HiveServer2 Client and Beeline, Hive MetricsHive Web InterfaceHive SerDes: Avro SerDe, Parquet SerDe, CSV SerDe, JSON SerDeHive Counters"
Price: 19.99

"Docker Interview Question and Answer (100+ FAQ)"
"Docker Interview Questions has a collection of 100+ questions with answers asked in the interview for freshers and experienced (Programming, Scenario-Based, Fundamentals, Performance Tuning based Question and Answer).This course is intended to help Docker Career Aspirants to prepare for the interview. We are planning to add more questions in upcoming versions of this course.Docker is a set of platform as a service products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels.Course Consist of the Interview Question on the following TopicsThe Docker platformDocker EngineDocker architectureThe Docker daemonThe Docker clientDocker registriesDocker objectsImages ContainersNamespacesControl groupsUnion file systemsContainer format"
Price: 19.99

"Apache Kafka Interview Question and Answer(100+ FAQ)"
"Apache Kafka Interview Questions has a collection of 100+ questions with answers asked in the interview for freshers and experienced (Programming, Scenario-Based, Fundamentals, Performance Tuning based Question and Answer).This course is intended to help Apache Kafka Career Aspirants to prepare for the interview. We are planning to add more questions in upcoming versions of this course.Apache Kafka is an open-source stream-processing software platform developed by LinkedIn and donated to the Apache Software Foundation, written in Scala and Java. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds.Course Consist of the Interview Question on the following Topics1. Kafka Core2. Kafka APIS3. CONFIGURATION of Kafka4. DESIGN of Kafka5. IMPLEMENTATION of Kafka6. OPERATIONS of Kafka7. SECURITY of Kafka8. KAFKA CONNECT9. KAFKA STREAMS"
Price: 19.99

"Olympic Games Analytics Project in Apache Spark for beginner"
"In this course you will learn to Analyze data (Olympic Game) in Apache Spark using Databricks Notebook (Community edition), 1) Basics flow of data in Apache Spark, loading data, and working with data, this course shows you how Apache Spark is perfect for Big Data Analysis job. 2) Learn basics of Databricks notebook by enrolling into Free Community Edition Server 3) Olympic Games Analytics a real world examples. 4) Graphical  Representation of Data using Databricks notebook.5) Hands-on learning6) Real-time Use Case7) Publish the Project on Web to Impress your recruiter About Databricks: Databricks lets you start writing Spark queries instantly so you can focus on your data problems.Lets discover more about the Olympic Games using Apache SparkData:Data exploration about the recent history of the Olympic GamesWe will explore a dataset on the modern Olympic Games, including all the Games from Athens 1896 to Rio 2016."
Price: 19.99

"Apache Pig Interview Questions and Answers"
"Apache Pig Interview Questions has a collection of 50+ questions with answers asked in the interview for freshers and experienced (Programming, Scenario-Based, Fundamentals, Performance Tuning based Question and Answer).This  course is intended to help Apache Pig Career Aspirants to prepare for the interview. We are planning to add more questions in upcoming versions of this course.Apache Pig is a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs. The salient property of Pig programs is that their structure is amenable to substantial parallelization, which in turns enables them to handle very large data sets.Course Consist of the Interview Question on the following TopicsPig CorePig Latin Built In FunctionsUser Defined FunctionsControl StructuresShell and Utililty CommandsPerformance and EfficiencyTesting and DiagnosticsVisual EditorsAdministrationIndexMiscellaneous"
Price: 19.99

"Apache Spark Project World Development Indicators Analytics"
"In this Apache Spark course you will learn to Analyze data (World Bank Dataset) in Apache Spark using Databricks Notebook (Community edition), 1) Basics flow of data in Apache Spark, loading data, and working with data, this course shows you how Apache Spark is perfect for Big Data Analysis job. 2) Learn basics of Databricks notebook by enrolling into Free Community Edition Server 3) World Development Indicators Analytics Project a real world examples. 4) Graphical  Representation of Data using Databricks notebook.5) Publish the Project on Web to Impress your recruiter 6) Hands-on learningAbout Databricks: Databricks lets you start writing Spark queries instantly so you can focus on your data problems.Lets discover more about the World Development Indicators Analytics Project using Apache SparkData:The World Development Indicators from the World Bank contain over a thousand annual indicators of economic development from hundreds of countries around the world."
Price: 19.99

"Learn Apache Spark to Generate Weblog Reports for Websites"
"Apache Spark is a flexible and fast framework designed for managing huge volumes of data. The engine supports the use of multiple programming languages, including Python, Scala, Java, and R. Therefore, before starting to learn Apache Spark use, you might want to focus on one of these languages.In this Apache Spark tutorial, we will be focusing on the eCommerce weblog report generation. For companies that are highly dependent on their web presence and popularity, it is crucial to determine the factors that might be related to a successful eCommerce strategy. As a result, some business-owners consider analyzing weblogs. During Apache Spark training, you will be introduced with a variety of reports that you can generate from these weblogs.What is Apache Spark?To learn Apache Spark, you need to be introduced to the basic principles of this engine. First of all, it is a framework for improving speed, simplicity of use, and streaming analytics spread by Apache. Apache Spark is an extremely efficient tool for performing data processing analysis.What are weblogs?A weblog can provide you with insightful information about how your visitors act on your website. By definition, weblog records the actions of users. They might be useful when aiming to determine which parts of your website attract the most attention. Logs can reveal how people found your website (for instance, search engines) and which keywords they used for searches.What will you find in this course?In this course for people that have chosen to learn Apache Spark, we will be focusing on a practical project to improve your skills. There will be some basics of how to use Spark, but you are expected to have a decent understanding of the way it works.For our project, you will have to download several files: they are a must for this Spark tutorial. Then, we will start by exploring file-level details and the process of creating a free account in DataBricks.The aim of the project in this course to learn Apache Spark is to review all of the possible reports that you can conduct from the weblogs. We will be retrieving critical information from the log files. For this purpose, we will use the DataBricks Notebook. As a brief reminder: DataBricks allows you to write Spark queries instantly without having to focus on data problems. It is considered as one of the programs to help you manage and organize data.We will learn how to use Spark to generate various types of reports. For instance, a session report provides information about the session activity, referring to the actions that a user with a unique IP performs during a specified period. The number of user sessions determines the amount of traffic that websites receive.This Apache Spark training course will also focus on a pageview report, which determines how many pages were viewed during a specified time. Additionally, you will learn about a new visitor report, indicating the number of new users that have visited the website during a given time.To learn Apache Spark better, you will be introduced with referring domains report, target domains report, top IP addresses report, search query report, and more!In this course, you will learn to create Weblog Report Generation for Ecommerce website log in Apache Spark using Databricks Notebook (Community edition), 1) Basics flow of data in Apache Spark, loading data, and working with data, this course shows you how Apache Spark is perfect for Big Data Reporting Engine. 
2) Learn the basics of Databricks notebook by enrolling into Free Community Edition Server 3) Ecommerce Weblog Tracking Report generation Project real-world example. 4) Graphical  Representation of Data using Databricks notebook.5) Create a Data Pipeline6) Launching Spark Cluster7) Process that data using Apache Spark8) Publish the Project on Web to Impress your recruiter About Databricks: Databricks lets you start writing Spark queries instantly so you can focus on your data problems.Let's discover more about the Ecommerce Weblog Tracking Report generation Project using Apache SparkData:The data is Weblog or Website log of Ecommerce Server (Unreal Data for Training Purpose)"
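As a hedged sketch of one of the reports discussed above, the following PySpark snippet parses an Apache combined-format access log with a regular expression and produces a simple page-view count per day. The log path, format, and field handling are illustrative assumptions; the course uses its own training dataset and report definitions.

    # Minimal sketch: parse web server log lines and build a page-view report.
    import re
    from pyspark.sql import SparkSession, functions as F

    LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)[^"]*" (\d{3}) (\S+)')

    def parse(line):
        m = LOG_RE.match(line)
        if not m:
            return None
        ip, ts, method, url, status, _size = m.groups()
        day = ts.split(":")[0]          # "10/Oct/2000" from the timestamp
        return (ip, day, method, url, int(status))

    spark = SparkSession.builder.appName("weblog-report").getOrCreate()
    rows = (spark.sparkContext.textFile("/data/access.log")
            .map(parse)
            .filter(lambda r: r is not None))
    df = rows.toDF(["ip", "day", "method", "url", "status"])

    # Page-view report: successful GET requests per day.
    pageviews = (df.filter((F.col("method") == "GET") & (F.col("status") == 200))
                 .groupBy("day").count().orderBy("day"))
    pageviews.show()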
Price: 19.99

"Apache Hadoop and Mapreduce Interview Questions and Answers"
"Apache Hadoop and Mapreduce Interview Questions has a collection of 120+ questions with answers asked in the interview for freshers and experienced (Programming, Scenario-Based, Fundamentals, Performance Tuning based Question and Answer).This  course is intended to help Apache Hadoop and Mapreduce Career Aspirants to prepare for the interview. We are planning to add more questions in upcoming versions of this course.The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures.Hadoop MapReduce is a software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in-parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a file-system. The framework takes care of scheduling tasks, monitoring them and re-executes the failed tasks.Typically the compute nodes and the storage nodes are the same, that is, the MapReduce framework and the Hadoop Distributed File System (see HDFS Architecture Guide) are running on the same set of nodes. This configuration allows the framework to effectively schedule tasks on the nodes where data is already present, resulting in very high aggregate bandwidth across the cluster.Course Consist of the Interview Question on the following TopicsSingle Node SetupCluster SetupCommands ReferenceFileSystem ShellCompatibility SpecificationInterface ClassificationFileSystem SpecificationCommonCLI Mini ClusterNative LibrariesHDFSArchitectureCommands ReferenceNameNode HA With QJMNameNode HA With NFSFederationViewFsSnapshotsEdits ViewerImage ViewerPermissions and HDFSQuotas and HDFSDisk BalancerUpgrade DomainDataNode AdminRouter FederationProvided StorageMapReduceDistributed Cache DeploySupport for YARN Shared CacheMapReduce REST APIsMR Application MasterMR History ServerYARNArchitectureCommands ReferenceResourceManager RestartResourceManager HANode LabelsNode AttributesWeb Application ProxyTimeline ServerTimeline Service V.2Writing YARN ApplicationsYARN Application SecurityNodeManagerUsing CGroupsYARN FederationShared CacheYARN UI2YARN REST APIsIntroductionResource ManagerNode ManagerTimeline ServerTimeline Service V.2YARN ServiceYarn Service APIHadoop StreamingHadoop ArchivesHadoop Archive LogsDistCpHadoop BenchmarkingReferenceChangelog and Release NotesConfigurationcore-default.xmlhdfs-default.xmlhdfs-rbf-default.xmlmapred-default.xmlyarn-default.xmlDeprecated Properties"
Price: 19.99

"Predictive Analytics with Apache Spark including Project"
"Predictive Analytics with Apache Spark using Databricks (Unofficial)  Notebook (Community edition)  including ProjectExplore Apache Spark and Machine Learning on the Databricks platform.Launching Spark ClusterCreate a Data PipelineProcess that data using a Machine Learning model (Spark ML Library)Hands-on learning using the example (Classification and Regression)Real-time Use CasePublish the Project on Web to Impress your recruiter Graphical Representation of Data using Databricks notebook.Transform structured data using SparkSQL and DataFramesAbout Databricks: Databricks lets you start writing Spark ML code instantly so you can focus on your data problems."
Price: 19.99

"Employee Attrition Prediction in Apache Spark (ML) Project"
"Spark Machine Learning Project (Employee Attrition Prediction) for beginners using Databricks Notebook (Unofficial) (Community edition Server) In this Data science Machine Learning project, we will create Employee Attrition Prediction Project using Decision Tree Classification algorithm one of the predictive models.Explore Apache Spark and Machine Learning on the Databricks platform.Launching Spark ClusterCreate a Data PipelineProcess that data using a Machine Learning model (Spark ML Library)Hands-on learningReal time Use Case Publish the Project on Web to Impress your recruiter Graphical Representation of Data using Databricks notebook.Transform structured data using SparkSQL and DataFramesEmployee Attrition Prediction a Real time Use Case on Apache SparkAbout Databricks: Databricks lets you start writing Spark ML code instantly so you can focus on your data problems."
Price: 19.99

"Spark Machine Learning Project (House Sale Price Prediction)"
"Spark Machine Learning Project (House Sale Price Prediction) for beginners using Databricks Notebook (Unofficial) (Community edition Server) In this Data science Machine Learning project, we will predict the sales prices in the Housing data set using LinearRegression one of the predictive models.Explore Apache Spark and Machine Learning on the Databricks platform.Launching Spark ClusterCreate a Data PipelineProcess that data using a Machine Learning model (Spark ML Library)Hands-on learningReal time Use Case Publish the Project on Web to Impress your recruiter Graphical Representation of Data using Databricks notebook.Transform structured data using SparkSQL and DataFramesPredict sales prices a Real time Use Case on Apache SparkAbout Databricks: Databricks lets you start writing Spark ML code instantly so you can focus on your data problems."
Price: 19.99

"Telecom Customer Churn Prediction in Apache Spark (ML)"
"Apache Spark Started as a research project at the University of California in 2009, Apache Spark is currently one of the most widely used analytics engines. No wonder: it can process data on an enormous scale, supports multiple coding languages (you can use Java, Scala, Python, R, and SQL) and runs on its own or in the cloud, as well as on other systems (e.g., Hadoop or Kubernetes).In this Apache Spark tutorial, I will introduce you to one of the most notable use cases of Apache Spark: machine learning. In less than two hours, we will go through every step of a machine learning project that will provide us with an accurate telecom customer churn prediction in the end. This is going to be a fully hands-on experience, so roll up your sleeves and prepare to give it your best!First and foremost, how does Apache Spark machine learning work?Before you learn Apache Spark, you need to know it comes with a few inbuilt libraries. One of them is called MLlib. To put it simply, it allows the Spark Core to perform machine learning tasks and (as you will see in this Apache Spark tutorial) does it in breathtaking speed. Due to its ability to handle significant amounts of data, Apache Spark is perfect for tasks related to machine learning, as it can ensure more accurate results when training algorithms.Mastering Apache Spark machine learning can also be a skill highly sought after by employers and headhunters: more and more companies get interested in applying machine learning solutions for business analytics, security, or customer service. Hence, this practical Apache Spark tutorial can become your first step towards a lucrative career!Learn Apache Spark by creating a project from A to Z yourself!I am a firm believer that the best way to learn is by doing. Thats why I havent included any purely theoretical lectures in this Apache Spark tutorial: you will learn everything on the way and be able to put it into practice straight away. Seeing the way each feature works will help you learn Apache Spark machine learning thoroughly by heart.I will also be providing some materials in ZIP archives. Make sure to download them at the beginning of the course, as you will not be able to continue with the project without it.And thats not all youre getting from this course can you believe it?Apart from Spark itself, I will also introduce you to Databricks a platform that simplifies handling and organizing data for Spark. Its been founded by the same team that initially started Spark, too. In this course, I will explain how to create an account on Databricks and use its Notebook feature for writing and organizing your code.After you finish my Apache Spark tutorial, you will have a fully functioning telecom customer churn prediction project. 
Take the course now, and have a much stronger grasp of machine learning and data analytics in just a few hours!Spark Machine Learning Project (Telecom Customer Churn Prediction) for beginners using Databricks Notebook (Unofficial) (Community edition Server) In this Data Science Machine Learning project, we will create Telecom Customer Churn Prediction Project using Classification Model Logistic Regression, Naive Bayes and One-vs-Rest classifier few of the predictive models.Explore Apache Spark and Machine Learning on the Databricks platform.Launching Spark ClusterCreate a Data PipelineProcess that data using a Machine Learning model (Spark ML Library)Hands-on learningReal time Use Case Publish the Project on Web to Impress your recruiter Graphical  Representation of Data using Databricks notebook.Transform structured data using SparkSQL and DataFramesTelecom Customer Churn Prediction a Real time Use Case on Apache SparkAbout Databricks: Databricks lets you start writing Spark ML code instantly so you can focus on your data problems."
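A hedged sketch of the churn-classification step described above, using Spark ML logistic regression in a small pipeline. Column names and the file path are assumptions; a real dataset would also need some cleaning (for example, casting charge columns to numeric) that is omitted here.

    # Minimal sketch: logistic-regression churn classifier.
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import StringIndexer, VectorAssembler
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.evaluation import MulticlassClassificationEvaluator

    spark = SparkSession.builder.appName("telecom-churn").getOrCreate()
    df = spark.read.csv("/data/telecom_churn.csv", header=True, inferSchema=True)

    stages = [
        StringIndexer(inputCol="Churn", outputCol="label"),
        VectorAssembler(inputCols=["tenure", "MonthlyCharges", "TotalCharges"],
                        outputCol="features"),
        LogisticRegression(labelCol="label", featuresCol="features"),
    ]
    train, test = df.randomSplit([0.8, 0.2], seed=7)
    model = Pipeline(stages=stages).fit(train)

    accuracy = MulticlassClassificationEvaluator(
        labelCol="label", metricName="accuracy").evaluate(model.transform(test))
    print(f"Test accuracy: {accuracy:.3f}")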
Price: 19.99

"Heart Attack and Diabetes Prediction Project in Apache Spark"
"Apache Spark Project - Heart Attack and Diabetes Prediction Project in Apache Spark Machine Learning Project (2 mini-projects) for beginners using Databricks Notebook (Unofficial) (Community edition Server) In this Data science Machine Learning project, we will create 1) Heart Disease Prediction 2) Diabetes Predictionusing a few algorithms of the predictive models.Explore Apache Spark and Machine Learning on the Databricks platform.Launching Spark ClusterProcess that data using a Machine Learning model (Spark ML Library)Hands-on learningReal time Use Case Create a Data PipelinePublish the Project on Web to Impress your recruiter Graphical  Representation of Data using Databricks notebook.Transform structured data using SparkSQL and DataFramesData exploration using Apache Spark1) Heart Disease Prediction using Decision Tree Classification Model2) Diabetes Prediction using Logistic Regression Model and One-vs-Rest classifier (a.k.a. One-vs-All) Model A Real time Use Case on Apache SparkAbout Databricks: Databricks lets you start writing Spark ML code instantly so you can focus on your data problems."
Price: 19.99

"Apache Web Server Log Report Generation Project in Spark"
"In this course, you will learn to Analyze data (Apache Web Server log) in Apache Spark using Databricks Notebook (Community edition), 1) Basics flow of data in Apache Spark, loading data, and working with data, this course shows you how Apache Spark is perfect for Big Data Analysis job. 2) Data exploration about Apache Web Server Log using Apache Spark3) Learn the basics of Databricks notebook by enrolling into Free Community Edition Server 4) Apache Web Server logs Analytics a real-world example. 5) Graphical  Representation of Data using Databricks notebook.6) Transform structured data using SparkSQL and DataFrames7) Launching Spark Cluster8) Hands-on learning9) Real-time Use Case10) Publish the Project on Web to Impress your recruiter About Databricks: Databricks lets you start writing Spark queries instantly so you can focus on your data problems.Let's discover more about the Apache Web Server log Report generation Project for beginners using Apache SparkData:Data exploration about the recent history of the Apache Web Server log."
Price: 19.99

"Apache Spark MCQ Practice Test useful for Certification"
"Apache Spark Multiple Choice Question Practice Test for Certification (Unofficial)  Course is designed for Apache Spark Certification Enthusiast"" This is an Unofficial course and this course is not affiliated, licensed or trademarked with Any Spark Certification in any way.""Useful for CRT020: Databricks Certified Associate Developer for Apache Spark 2.4 with Scala 2.11 Assessment This course offers you practice tests comprising of Most Expected Questions for Exam practice, that mimics the actual certification exam, which will help you get prepared for the main exam environment.It will help you prepare for certification by providing sample questions.This will boost your confidence to appear for certification and also provides you with sample scenarios so that you are well equipped before appearing for the exam.Please Note: These questions are only for practice and understanding level of knowledge only. It is not necessary that these questions may or may not appear for examinations and/or interview questions."
Price: 19.99

"Apache Spark Project Predicting Customer Response in Banking"
"Predicting Customer Response to Bank Direct Telemarketing Campaign Project in Apache Spark Project (Machine Learning) for a beginner using Databricks Notebook (Unofficial)Telemarketing advertising campaigns are a billion-dollar effort and one of the central uses of the machine learning model. However, its data and methods are usually kept under lock and key. The Project is related to the direct marketing campaigns of a banking institution. The marketing campaigns were based on phone calls. Often, more than one contact to the same client was required, in order to access if the product (bank term deposit) would be ('yes') or not ('no') subscribed.In this Data Science Machine Learning project, we will create Predicting Customer Response to Bank Direct Telemarketing Campaign Project in Apache Spark Project (Machine Learning) using Classification Model, Logistic Regression, few of the predictive models.Explore Apache Spark and Machine Learning on the Databricks platform.Launching Spark ClusterCreate a Data PipelineA process that data using a Machine Learning model (Spark ML Library)Hands-on learningReal-time Use Case Publish the Project on Web to Impress your recruiter Predicting Customer Response to Bank Direct Telemarketing Campaign Project a Real-time Use Case on Apache SparkAbout Databricks: Databricks lets you start writing Spark ML code instantly so you can focus on your data problems."
Price: 19.99

"Spark Project (Prediction Online Shopper Purchase Intention)"
"Real-time Prediction of online shoppers purchasing intention Project using Apache Spark Machine Learning ModelsOnce a user logs into an online shopping website, knowing whether the person will make a purchase or not holds a massive economical value. A lot of current research is focused on real-time revenue predictors for these shopping websites. In this article, we will start building a revenue predictor for one such website. In this Data Science Machine Learning project, we will create a Real-time prediction of online shoppers purchasing intention Project using Apache Spark Machine Learning Models using Logistic Regression, one of the predictive models.Explore Apache Spark and Machine Learning on the Databricks platform.Launching Spark ClusterCreate a Data PipelineProcess that data using a Machine Learning model (Spark ML Library)Hands-on learningReal-time Use CasePublish the Project on Web to Impress your recruiter Prediction of Online Shoppers Purchasing Intention Project a Real-time Use Case on Apache SparkAbout Databricks: Databricks lets you start writing Spark ML code instantly so you can focus on your data problems."
Price: 19.99

"Apache Zeppelin - Big Data Visualization Tool"
"Apache Zeppelin - Big Data Visualization Tool for Big data Engineers An Open Source Tool (Free Source for Data Visualization)Master Bigdata Visualization with Apache Zeppelin.Various types of Interpreters to integrate with a various big data ecosystemApache Zeppelin provides a web-based notebook along with 20 plus Interpreters to interact with and facilitates collaboration from a WebUI. Zeppelin supports Data Ingestion, Data Discovery, Data Analysis, and Data Visualization.Using an integration of Interpreters is very simple and seamless. Resultant data can be exported or stored in various sources or can be explored with various visualization and can be analyzed with pivot graph like the setupThis course introduces every aspect of visualization, from story to numbers, to architecture, to code. Tell your story with charts on the web. Visualization always reflects the reality of the data.We will Learn: Data Ingestion in Zeppelin environmentConfiguring InterpreterHow to Use Zeppelin to Process Data in Spark Scala, Python, SQL and MySQLData DiscoveryData Analytics in ZeppelinData VisualizationPivot ChartDynamic FormsVarious types of Interpreters to integrate with a various big data ecosystemVisualization of results from big data"
Price: 19.99

"Apache Spark Project - eCommerce Customer Revenue Prediction"
"Apache Spark Project - eCommerce Customer Revenue Prediction - Spark Machine Learning Models This project is about eCommerce company that sells clothes online. This project is about customers who buy clothes online. The store offers in-store style and clothing advice sessions. Customers come into the store, have sessions/meetings with a personal stylist, then they can go home and order either on a mobile app or website for the clothes they want.We need to predict the future spending of Customer(ie Revenue for Company ) so business strategies can be made to convert ""Customer"" to ""Loyalty Customer"" In this Data Science Machine Learning project, we will create an eCommerce Customer Revenue Prediction Project using Apache Spark Machine Learning Models using Linear Regression, one of the predictive models.Explore Apache Spark and Machine Learning on the Databricks platform.Launching Spark ClusterCreate a Data PipelineProcess that data using a Machine Learning model (Spark ML Library)Hands-on learningReal-time Use CasePublish the Project on Web to Impress your recruiter eCommerce Customer Revenue Prediction Project a Real-time Use Case on Apache SparkAbout Databricks: Databricks lets you start writing Spark ML code instantly so you can focus on your data problems."
Price: 19.99

"Sports injury rehabilitation"
"This course shows the process the body goes through following an injury and the process of rehabilitation.  It begins with immediate first aid and follows the general process of rehab.  This includes range of motion, strengthening, including sport specific, proprioception and whole body training, including cardio training, flexibility and mental training.  The course is general in the process but does give injury specific examples of types of exercises in each area."
Price: 24.99

"The basics of concussions"
"This course will address what concussions are, how to identify them with both symptoms and the SCAT.  Students will learn what to do once you have a concussion and the recovery process with both return to learn and play.  Post concussive concerns will also be addressed and what steps you can take to help. "
Price: 24.99

"Python & Whatsapp Hacking & Batan sona python uygulamalari!"
"-Bu kurs gerek python dilinin temelini gerekse de python n ileri seviyede kullanlacak gerekli modullerini  ileri seviyede ve  gerek hayatta nelerin yapldn rahat ve elenceli bir ekilde akla kavusturulmas. -WHATSAPP BOT-Genel webdriver yaps-Kullanc arayzl uygulama gelitirme-Bu kursta python ile ilgili ilk bata temel ve kendinizi gelitirebileceiniz dzeyde giri dersi verdikten sonra orta ve ileri dzeyde ki konulara arlk verdim . Whatsapp bot yapma (bu ayn zamanda btn bot uygulamalarnn temeli olarakta saylabilir) bunun yansra gui yani kullanc arayz oluturmay ve  daha ayrntl ksmlara kendi arzunuzun istei lde aratrma ekillerini sizlerle paylatm.-Ayn zaman da python ile c++ , c# , java , swift gibi dillere hzlca adaptasyon.Python bilgisiyle hacking toollar yazabilme becerisi ve anlayabilme becerisi."
Price: 199.99

"Learn Turkish Online"
"During this course you will learn Turkish grammar in the easiest way. All the topics discussed here are enriched with examples and all subjects may be seen both in English and Turkish languages. Bu kurs boyunca Trke dil bilgisini en kolay biimde reneceksiniz. Ele alnan tm konular rnekler ile zenginletirilmi olup, tm konular hem ngilizce hem de Trke grlebilir."
Price: 59.99

"Google Analytics Mastery of Custom Dashboards"
"Boost Revenue, Traffic and own your metrics. Understand more with the key performance indicators. Opt-in NowGoogle Analytics gives you the tools you need to better understand your customers. You can then use those business insights to take action.Get to know your customers.Get a deeper understanding of your customers. Google Analytics gives you the free tools you need to analyze data for your business in one place.See whats in it for you.Build a complete picture.Understand your site and app users to better evaluate the performance of your marketing, content, products, and more.Get insights only Google can give.Access Googles unique insights and machine learning capabilities to help get the most out of your data.Connect your insights to results.Analytics is built to work with Googles advertising and publisher products so you can use your analytics insights to reach the right customers.Make your data work for you.Process and share your data quickly with an easy-to-use interface and shareable reports.Designed to work together.Easily access data from other solutions while working in Analytics, for a seamless workflow that saves you time and increases efficiency.Google AdsGain deeper insights into how users from your Google Ads campaigns engage with your site.Data StudioConnect Analytics with Data Studio to easily build performance dashboards and create customized reports."
Price: 199.99

"VLSI - Physical Design - 33 Hours of video"
"Module 1: Introduction to physical design automationModule 2 : Floorplanning and placemantModdule 3: RoutingModule 4: Static timing analysisModule 5 : Signal Integrity and crosstalk issuesModule 6: Clocking Issues and clock tree synthesisModule 7: Low Power design issuesModule 8:  Noise analysis and layout compactionModule 9 : Physical verification and sign offThere is 30 day money back guruntee. so you ahve nothing to lose."
Price: 199.99

"Computer Architecture Beginner to Advanced - 45 Hours of HD"
"So, let us distinguish between two kinds of terms that are used for the study of computers; the first is computer architecture, and the second is computer organization. So, computer architecture is a view of a computer; that is presented to  software designers, which essentially means, that it is an interface or a specification that the software designers see, it is a view that they see, of this is how the computer works and this is how they should write, software for it, whereas computer organization, is the actual implementation of a computer in hardware. Oftentimes, the terms computer architecture and computer organization, are actually confused or computer architecture is used for both. So, that is common, but we should keep in mind that there are two separate terms; one of them is computer architecture, and the other is computer organization. So, what again is a computer? We have computers everywhere, we have a computer on the desktop over here, we have a computer in a laptop or phone, an iPad. So, we can define a computer, as a general purpose device, that can be programmed, to process information, and yield meaningful results. So, mind you this definition has several facets to it; the first is, that a computer you should be able to program it. So, circuit that does a specific action is actually not a computer. So, for example, let us say that you have small thermometers on top of the room, which is showing what the current temperature is? That is not a computer even though you know is showing it is temperature on a nice screen. The reason is that this device cannot be programmed. Second, it needs to be able to process some information that is given from outside; like you enter some information via a keyboard or a mouse. It is processing  that information; it needs to yield meaningful results. So, all these three facets are important for defining what a computer is. So, how does the computer work? A computer has a program which tells the computer what needs to be done; that is because, the way that we have defined a computer, is that it should be possible to instruct it to do something. So, having a program is point number one. Second there needs to be some information that the program will work on. For example, let us say you are trying to you clicked the photograph, and the photograph has some red eyes. So, we are trying to remove the red eye effect in photographs. So, in this case the photograph will be the information stored, and the program will be the piece of code; that is working on the information, photographs in this example, and then the finished good looking photographs is the result. So, what is the program again, it is a list of instructions given to the computer. The information store is all the data image images, files, videos that a computer might process, and the computer once again is an intelligent device that can be instructed to do something, on the basis of the instructions, it processes some known information, to generate new and better and meaningful results.   So, let us take the lid off a desktop computer and see what is over there. So, if you take, if you open a desktop computer, the first thing that you see over here, is a green board, and this is called the motherboard. So, this is a circuit board for the rest of the circuit is r. The two most important circuits is that at least we are interested in at the moment, is the CPU the central processing unit, which is the main brain of the computer. This is the CPU. 
You would also see a small fan on top of it the job of the fan is to remove heat, and it is the other rectangle over here which is called the memory of the main memory. So, this temporarily stores the information that the CPU the processor is going to use, and the computer processor reads data from main memory processes it and writes it back. We also another very important unit over here which is the hard disk, this also saves information. What are the key differences between the memories in the hard disk; number one, the memory storage capacity is small. Maybe in todays day and age it might be like 32 gigabytes, whereas, the hard disk storage capacity might be 10 times more or 20 times more. So, so we will get into the definition of how much what is a kilobyte or megabyte or gigabyte in chapter two, but essentially it is a unit of storage. So, the hard disk can typically store 10 times more data, but it is also significantly slower. The other advantage of a hard disk is that if I turn the power off, all the data in the memory will get erased, whereas the data in the hard disk will remain. So, those are the differences. So, in this course, we will primarily be interested in these three unit is which  are the CPU the memory and the hard disk, but mind you there are many other smaller processors all over the motherboard. So, will not have a chance to talk to them, but we will have a chance to at least discuss some of them, in the last chapter, in chapter 12 not at the moment though. So, what does a simple computer look like? Simple computer, like the one that we looked at right now, has a computer a CPU, which does the processing; it has a memory and a hard disk. So, the hard disk maintains all the informations, when a computer is powered off. When it is powered on, some of the information comes to the memory, then the computer reads information from the memory, works on it, and again writes the results back.   What more do we need to add to make this a full functioning system. We need to add I O devices, input output devices. This can include a keyboard a mouse for entering information. And for displaying information, it can be a monitor or a printer, to display the kind of information that a computer has computed. It can be other media also; like the network or a USB port, but we will gradually see what these are, at the moment let us confine ourselves to a very- very simple system.  "
Price: 199.99

"Ethical Hacking from Top University Professor"
"I would like to welcome you to this course on Ethical Hacking. This is the first lecture of this course. Now, in this lecture, I will try to give you a very overall idea about what ethical hacking exactly is, what are the scopes of an ethical hacker and towards the end, I shall give you some idea about the coverage of this course what are the things we are expected to cover ok. So, the title of this lecture is Introduction to Ethical Hacking. Now, in this lecture as I told you, firstly we shall try to tell you what is ethical hacking? There is a related terminological penetration testing, we will also be discussing about that. And some of the roles of an ethical hacker, what an ethical hacker is expected to do and what he or she is not expected to do that we shall try to distinguish and discuss.  So, let us first start with the definition of ethical hacking. What exactly is ethical hacking? Well, we all have heard the term hacking and hacker essentially the term has been associated with something which is bad and malicious. Well, when we hear about somebody as a hacker, we are a little afraid and cautious ok. I mean as if the person is always trying to do some harm to somebody else to some other networks, try to steal something, trying to steal something from some IT infrastructure and so on and so forth. But ethical hacking is something different. Well, ethical hacking as per the definition if you just look at it, it essentially refers to locating the weaknesses and vulnerabilities. It means suppose you have a network, you have an organizational network, you have an IT, IT infrastructure, you have computers which contains some software, some data, lot of things are there. Now, you try a, I mean here you are trying to find out, whether your infrastructural network does have some weak points or vulnerabilities through which an actual hacker can break into your system, into your network. So, this ethical hacking is the act of locating weaknesses and vulnerabilities in computers and information system in general, it covers everything, it covers networks, it cover databases, everything. But how this is done, this is done by mimicking the behaviour of a real hacker as if you are a hacker, you are trying to break into your own network, there you will get lot of information about what are the weak points in your own network. So, this term is important, by replicating the intent and actions of malicious hackers, whatever malicious hackers do in reality, you try to mimic that, you try to replicate that ok. Your objective is to try and find out the vulnerabilities and weak points in your network. Well, you have a good intent, you try to identify the weaknesses and later on maybe the organization will be trying to plug out or stop those weaknesses, so that such attacks cannot occur or happen in the future ok. This ethical hacking is sometimes also referred to by some other names, penetration testing is a well-known terminology which is used a phrase, intrusion testing, red teaming, these are also terminologies which are used to mean the same thing. Well, you can understand penetration testing, the literal meaning of this phrase is, you are trying to penetrate into a system; you are trying to penetrate into a network, you are testing and find out whether or not you are able to penetrate. And if you are able to penetrate which are the points through which it is easier to penetrate, these are the objectives ok, all right.  So, talking about ethical hacking, there are some terminology, let us see. 
Well ethical hackers are the persons who are actually carrying out ethical hacking. Now, they are not some unknown entities, they are some organization or persons who are actually hired by the company. The company is paying them some money to do a penetration testing on their own network and provide them with a list of vulnerabilities, so that they can take some action later on ok. So, these ethical hackers are employed by companies who typically carry out penetration testing or ethical hacking. Penetration testing, as I had said is an attempt to break into a network or a system or an infrastructure. But the difference from malicious attempt is that this is a legal attempt. The company has permitted you to run the penetration testing on their own network for the purpose of finding the vulnerabilities. So, this is a legal attempt, you are trying to break in and you are trying to find out the weak links. Well, in penetration testing per se what the tester will do, tester will basically generate a report. The report will contain a detailed report; it will contain all the known vulnerabilities that have been detected in the network as a result of running the penetration testing process ok. But normally they do not provide solutions. Well, you can also seek solutions for them, but everything comes with an extra or additional charge right. So, in contrast, security test is another terminology which is used, which includes penetration test plus this kind of suggestions to plug out the loopholes. So, this includes in addition analyzing the company security policies and offering solutions, because ultimately the company will try to secure or protect their network. Of course, there are issues, there may be some limited budget. So, within that budget whatever best is possible that have to be taken care of or incorporated. So, these are some decisions the company administration will have to take fine.    So, some of the terminologies that we normally use hacking, hacking broadly speaking, we use this term to refer to a process which involves some expertise. We expect the hackers to be expert in what they are doing. At times we also assume that hackers are more intelligent in the persons, than the persons who are trying to protect the network. This assumption is always safe to make that will make your network security better ok. Cracking means breaching the security of a some kind of system, it can be software, it can be hardware, computers, networks whatever, this is called cracking, you are trying to crack a system. Spoofing is a kind of attack, where the person who is, who is attacking is trying to falsify his or her identity. Suppose, I am trying to enter the system, but I am not telling who I am, I am telling I am Mr. X, Mr. X is somebody else right. So, it is the process of faking the originating address in a packet, a packet that flows in a network is sometimes called a datagram ok. So, the address will not be my address, I will be changing the address to somebody elses address, so that the person who will be detecting that will believe that someone else is trying to do whatever is being done ok. Denial of service is another very important kind of an attack which often plagues or affects systems or infrastructures. Well, here the idea is that one or a collection of computers or routers or whatever you can say, a collection of nodes in the network, they can flood a particular computer or host with enormous amount of network traffic. 
The idea is very simple, suppose I want to bring a particular server down, I will try to flood it with millions and millions of packets, junk packets, so that the server will spend all of its time filtering out those junk packets. So, whenever some legitimate requests are coming, valid packets are coming, they will find that the service time is exceedingly slow, exceedingly long, this is something which is called denial of service. And port scanning is a terminology which you use very frequently, well ports in a computer system this we shall be discussing later. Ports indicate some entry points in the system which connects the incoming connections to some programs or processes running in the system. Say means in a computer system there can be multiple programs that are running, and these programs can be associated with something called a port number ok. Whenever you are trying to attack a system, normally the first step is to scan through some dummy packets ping, these are called ping packets and try to find out which of the port numbers in the system are active. Suppose, you find out that there are four ports which are active then normally there is a well documented hacking guideline which tells you that for these four ports what are the known vulnerabilities and what are the best ways to attack or get entering those into the system through these ports. So, this port scanning is the process of identifying which are the active ports which are there and then searching for the corresponding vulnerabilities, so that you can exploit them ok. These are called exploits, once you identify the ports you try to find out an exploit through which you can get entry into the system, this is roughly the idea.  Now, talking about gaining access into the system, there are different ways in which you can gain access to a system. One is you are entering the system through the front door. So, the name is also given front door access. Normally, a system, normally I am talking about whenever you try to access the system you try to log in, you are validated with respect to some password or something similar to that.  So, passwords are the most common ways of gaining entry or access to a system in the present day scenario ok. So, the first attempt through that front door channel will be to guess valid password or try and steal some password. There are many methods that are used for this purpose. During this course you will be seeing some of the tools through which you can try and do this ok. This is the front door. The second thing is a back door which normally a person coming is not able to see, but it is there. Those of you who know there is a back door, they can only enter through that back door. This is the basic idea. So, back doors are some you can say entry points to a system which had deliberately kept by the developers. Well, I am giving an example suppose I buy a router, a network router from some company, they give me some root password and access rights, I change the root password. So, I am quite happy that means, I have sole access to it, I have changed the password, I am safe. But sometimes it may happen if something goes down, the company might automatically modify or configure, reconfigure the router through that back door. They will not even ask you at times. They will automatically enter the router through that backdoor entry, there will be some special password through which they can possibly enter and they can make some changes inside. 
Such back doors are known to exist in many systems, not only hardware systems also many of these software systems, software packages ok. Well, usually developers keep it as debugging or diagnostic tools, but sometimes these are also used for malicious purposes ok. Then comes the Trojan horses. Now, if you remember the story of the Trojan horse where it is something which was hidden inside a horse, some warriors were hidden inside a horse. Suddenly some time one night, they just comes out and start creating havoc. Trojan horse is also in terms of a computer system something very similar. Here let us think of a software first. So, it is a software code that is hidden inside a larger software. Well, as a user you are not even aware that such a Trojan is there inside the software ok. Now, what happens sometimes that Trojan software can start running and can do lot of malicious things in your system. For example, they can install some back doors through which other persons or other packets can gain entry into your system. Nowadays, you will also learn as part of the course later, Trojans can also exists in hardware. Whenever you built a chip, you fabricate a chip, without your knowledge, some additional circuitry can get fabricated which can allow unauthorized access or use of your chip, of your system during its actual runtime ok. And lastly come software vulnerabilities exploitation. Well, when a software is developed by a company, that software is sold, with time some vulnerabilities might get detected. Normally, those vulnerabilities are published in the website of that company that well, these are the vulnerabilities please install this patch to stop or overcome that vulnerability. But everyone do not see that message and do not install the patch. But as a hacker if you go there and see that well these are the vulnerabilities in that software, you try to find out where all that software is installed and you try to break into those in using those vulnerable points ok. And this kind of software vulnerabilities are typically used, you can say as a playground for the first time hackers. Sometimes they are called script kiddies. The hackers who are just learning how to hack and that is the best place means already in some website it is mentioned that these are the vulnerabilities, they just try to hack and see that whether they are able to do it or not all right. Now, once a hacker gains access inside a system, there can be a number of things that can be done. For example, every system usually has a log which monitors that who is logging into the system at what time, what commands they are running and so on and so forth. So, if the hacker gets into the system, the first thing he or she will possibly try to do is modify the log, so that their tracks are erased. So, if the system administrator looks at the log later on, they will not understand that well an hacking actually happened or not. So, some entries in the log file can get deleted; can be deleted, some files may be stolen, sometimes after stealing the files, files can be destroyed also ok, some files might get modified, like you have heard of defacement of websites, some hackers break into a website and change the contents of the page to something malicious, so that people know that well we came here, we hacked your system, just to cause mischief well. Installing backdoors is more dangerous. So, you will not understand what has happened, but someone has opened a back door through which anyone can enter into a system whenever they want ok. 
And from your system, some other systems can be attacked. Suppose in a network, there are 100 computers, someone gains entry into one of the systems, one of the computers; from there the other 99 computers can be attacked if they want to, right, ok. Now, talking about the roles of the testers, who are carrying out the security testing and penetration testing. Well, I talked about script kiddies, the beginners who have just learned how to break into systems. They are typically young or inexperienced hackers. So, usually what they do, they look at some existing websites, lot of such hacking documentations are there, from there they typically copy codes, run them on the system and see that whether actually the attacks are happening as it has been published or discussed in those websites, right. But experienced penetration testers they do not copy codes from such other places, they usually develop scripts, they use a set of tools and they run a set of scripts using which they run those tools in some specific ways to carry out specific things. And these tools or these scripts are typically written in different scripting language like Perl, Python, JavaScript, they can be written also in language like C, C++ and so on. (Refer Slide Time: 21:30) Now, broadly the penetration testing methodologies if you think about, first thing is that the person who is doing penetration testing, he or she must have all the set of tools at his or her disposal. This is sometimes called a tiger box. Tiger box basically is a collection of operating systems and hacking tools which typically is installed in a portable system like a laptop, from there wherever the person wants to carry out penetration testing, he or she can run the correct tool from there and try to mount a virtual attack on that system, and see whether there are any vulnerabilities or not. So, this kind of tools helps penetration testers and security tester to conduct vulnerability assessment and attacks. This tiger box contains a set of all useful tools that are required for that ok. Now, for doing this penetration testing, from the point of view of the tester, the best thing is white box model. Where the company on whose behalf you are doing the testing tells the tester everything about the network and the network infrastructure, they provide you with a circuit diagram with all the details ok, means about the network topology, what kind of new technologies are used in the network everything.  And also the tester if they require, whenever they require, they are authorized to interview the IT personnel. Many times it is required in a company, if you interview people, you will get to know a lot of things that how the information processing is carried out inside the company, what are the possible vulnerabilities that they feel there are ok. So, this white box model makes the testers job a lot easier, because all the information about the network whatever is available is made available or given to the tester ok. Now, the exact reverse is the black box model. Black box model says that tester is not given details about the network. So, it is not that the person who is asking the tester to test, is deliberately not giving, maybe the person is not competent enough and does not know the relevant information to be shared with the tester. So, tester will have to dig into the environment and find out whatever relevant information is required. So, the burden is on the tester to find out all the details that may be required. 
In practice we usually have something in between: neither white box nor black box, but what is called the gray box model. The gray box model is a hybrid of the two: the company provides the tester with partial information about the network and the rest of the environment. Why only partial? Because the company may know the details of some of its subsystems, while for others, perhaps simply bought and installed as-is, no details are available to them either, so they cannot pass them on. These, broadly, are the approaches.

There are some legal issues as well, and they vary from country to country. In our country the rules are not that rigid; in some other countries they are extremely rigid, to the point that you may not be allowed to install certain kinds of software on your computers. Laws involving technology, particularly IT, change and develop very quickly, and it is difficult to keep track of the latest law of the land. It is always good to know the exact set of rules that apply where you work: what is allowed and what is not. You may be doing something in good faith that is nevertheless illegal in that state or country, and you may land in trouble later. So the laws of the land are very important to know; some of the tools on your computer may themselves be illegal in a given country, and you must be aware of that. Punishments for cyber crime are becoming more severe with every passing day, so these are things to be extremely cautious about.

Certain things, though, are obviously off limits, and everyone understands them. Accessing a computer without permission is clearly illegal: it is my computer, so why are you accessing it without my permission? Installing worms or viruses is likewise illegal; I did not install them and I did not ask you to, so why have you injected them into my computer? Denial-of-service attacks are also illegal: servers are set up to provide services to customers, and deliberately denying those services is not permissible. Similarly, denying users access to networking resources is not allowed. Even when you are doing ethical hacking work that a company has asked you to do inside its own network, you must be careful not to prevent that company's customers from doing their job; your actions should not disrupt their business.

So, in a nutshell: if you are a security tester, what do you need to know and do? The first thing, clearly, is a sound knowledge of networking and computer technology.
As part of this course we will therefore devote a significant amount of time to discussing, or brushing up, the relevant networking background, because it is essential for understanding what you are doing, how you are doing it and why. You also cannot do everything on your own; you need to communicate and interact with other people, and that art also has to be mastered. As I mentioned, the laws of the land are very important to understand, and you should have the necessary tools at your disposal: some are freely available, some have to be purchased, and some you may develop yourself. Only with the whole toolset in hand can you call yourself a good ethical hacker, penetration tester or security tester.

About this course, very broadly: we shall cover the relevant network technologies, as I said, because understanding basic networking concepts is essential; without them you will not be able to use the tools at all. Basic cryptographic concepts are also required, because when you try to fix weak points or vulnerabilities you will often have to apply some kind of cryptographic technique or solution, and you need to understand what is and is not possible with cryptography. We shall look at some case studies of secure applications to see how these cryptographic primitives are put into practice. Then we shall look at unconventional attacks, including hardware-based attacks, which are very interesting, very recent and quite unconventional. And a significant part of the course will concentrate on demonstrating various tools: how to actually mount penetration tests and other kinds of attacks on your systems and networks.

With this I come to the end of the first lecture. I expect that the lectures to come will be useful in understanding the broad subject of ethical hacking, and will motivate you to possibly become an ethical hacker in the future."
Price: 19.99

"Machine Learning Masterclass"
"Hello everyone and welcome to this  course on an introduction to machine learning in this course we will have a quick introduction to machine learning and this will not be very deep in a mathematical sense but it will have some amount of mathematical trigger and what we will be doing in this course is covering different paradigms of machine learning and with special emphasis on classification and regression tasks and also will introduce you to various other machine learning paradigms. In this introductory lecture set of lectures I will give a very quick overview of the different kinds of machine learning paradigms and therefore I call this lectures machine learning. ) A brief introduction with emphasis on brief right, so the rest of the course would be a more elongated introduction to machine learning right. So what is machine learning so I will start off with a canonical definition put out by Tom Mitchell in 97 and so a machine or an agent I deliberately leave the beginning undefined because you could also apply this to non machines like biological agents so an agent is said to learn from experience with respect to some class of tasks right and the performance measure P if the learners performance tasks in the class as measured by P improves with experience. So what we get from this first thing is we have to define learning with respect to a specific class of tasks right it could be answering exams in a particular subject right or it could be diagnosing patients of a specific illness right. So but we have to be very careful about defining the set of tasks on which we are going to define this learning right, and the second thing we need is of a performance measure P right so in the absence of a performance measure P you would start to make vague statement like oh I think something is happening right that seems to be a change and something learned is there is some learning going on and stuff like that. So if you want to be clearer about measuring whether learning is happening or not you first need to define some kind of performance criteria right. So for example if you talk about answering questions in an exam your performance criterion could very well be the number of marks that you get or if you talk about diagnosing illness then your performance measure would be the number of patients that you say are the number of patients who did not have adverse reaction to the drugs you gave them there could be variety of ways of defining performance measures depending on what you are looking for right and the third important component here is experience right. So with experience the performance has to improve right and so what we mean by experience here in the case of writing exams it could be writing more exams right so the more the number of exams you write the better you write it better you get it test taking or it could be a patient's in the case of diagnosing illnesses like the more patients that you look at the better you become at diagnosing illness right. So these are the three components so you need a class of tasks you need a performance measure and you need some well-defined experience so this kind of learning right where you are learning to improve your performance based on experience is known as a this kind of learning where you are trying to where you learn to improve your performance with experience is known as inductive learning. 
The basis of inductive learning goes back several centuries; people have debated it for hundreds of years, and only relatively recently have we had quantified mechanisms of learning. One thing I always point out is that you should take this definition with a pinch of salt. Take the example of a slipper. The task is fitting your foot comfortably; the slipper is supposed to protect your foot, and a performance measure could be whether it fits the foot comfortably or whether, as people say, it bites or chafes your feet. With experience, as you keep wearing the slipper for longer periods of time, it adapts to your foot and becomes better at the task of fitting it, as measured by whether it chafes or bites. So would you say the slipper has learned to fit your foot? By this definition, yes. That is why we take the definition with a pinch of salt: not every system that conforms to this definition of learning would usually be said to learn.

(Refer Slide Time: 06:11)

Moving on, there are different machine learning paradigms that we will talk about. The first is supervised learning, where you learn an input-to-output map. The input could be a description of a patient who comes to the clinic, and the output to be produced is whether the patient has a certain disease or not; or the input could be a question, and the output the answer, or a true/false judgment for a given statement. In supervised learning you essentially learn a mapping from the input to the required output. If the output is categorical, like "has the disease" or "does not have the disease", or true or false, the supervised learning problem is called a classification problem. If the output is a continuous value, like how long a product will last before it fails, or the expected rainfall tomorrow, those problems are called regression problems. We will look at classification and regression in more detail as we go on.

The second class of problems is unsupervised learning, where the goal is not really to produce an output in response to an input; rather, given a set of data, we have to discover patterns in it. There is no real desired output that we are looking for; we are more interested in finding patterns in the data.
Clustering is one such unsupervised learning task, where you are interested in finding cohesive groups among the input patterns. For example, I might look at the customers who come to my shop and try to figure out whether there are categories of customers, say college students as one category and working IT professionals as another; when I look for such groupings in my data, I call it a clustering task. The other popular unsupervised learning paradigm is association rule mining, or frequent pattern mining, where you are interested in frequent co-occurrences of items in the data: whenever customer A comes to my shop, customer B also comes. Having learned such an association, if I see A, I can say it is very likely that B is also somewhere in my shop. Again, we will look at this later in more detail; there are many variants of supervised and unsupervised learning, but these are the main ones we consider. The third form of learning, reinforcement learning, is neither supervised nor unsupervised in nature; these are typically problems where you are learning to control the behaviour of a system, and I will give you more intuition about reinforcement learning in one of the later modules.

(Refer Slide Time: 09:33)

As I said earlier, every task needs some kind of performance measure. For classification the performance measure is typically the classification error: how many of the items, or how many of the patients, did I get wrong, both those who did not have the disease but were predicted to have it, and those who had the disease but were missed. That is the measure we would want to use, though we will see later that it is often not possible to learn directly with respect to it, so we use other forms; we will talk about many different performance measures over the duration of the course. Likewise for regression we have the prediction error: if I say it is going to rain 23 millimetres and it ends up raining 49 centimetres, that is a huge prediction error. For clustering it becomes a little trickier to define performance measures, because we do not directly know how to measure the quality of clusters. People have come up with different kinds of measures; one of the more popular ones is the scatter, or spread, of a cluster, which essentially tells you how spread out the points belonging to a single group are. Remember we are supposed to find cohesive groups, so if a group is not cohesive, if its points are not all close together, you would say the clustering is of poorer quality.
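As a small illustration of two of the measures just mentioned, here is a hedged Python sketch of classification error and within-cluster scatter; the sample numbers are made up.

# Classification error and within-cluster scatter (spread), on toy data.
import numpy as np

def classification_error(y_true, y_pred):
    """Fraction of examples whose predicted label differs from the true one."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(y_true != y_pred)

def cluster_scatter(points, assignments):
    """Sum of squared distances of each point to the mean of its own cluster."""
    points = np.asarray(points, dtype=float)
    assignments = np.asarray(assignments)
    total = 0.0
    for label in np.unique(assignments):
        members = points[assignments == label]
        total += np.sum((members - members.mean(axis=0)) ** 2)
    return total

print(classification_error([1, 0, 1, 1], [1, 1, 1, 0]))              # 0.5
print(cluster_scatter([[0, 0], [0, 1], [5, 5], [6, 5]], [0, 0, 1, 1]))  # small = cohesive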
If you have external information, as I was saying, for instance you know which customers are actually college students, then you can do an external evaluation and check what fraction of a cluster consists of college students; one measure of that kind that people use popularly is known as purity. In association rule mining we use measures called support and confidence; they take a little work to explain, so I will defer them until I talk about association rules in detail. In reinforcement learning, where I told you the task is learning to control a system, you incur a cost for controlling it, and the measure is that cost: you would like to minimize the cost you accrue while controlling the system. These are the basic machine learning tasks.

(Refer Slide Time: 12:11)

There are several challenges when you try to build a machine learning solution, a few of which are listed on this slide. The first is how good the model you have learned is. I mentioned a few measures on the previous slide, but often those are not sufficient; there are other practical considerations that come into play, and we will look at some of them around the middle of the course. The bulk of our time will be spent on the second question: how do I choose a model? Given some data, which is the experience we talked about, how do I choose a model that learns what I want it to do and improves with experience, and how do I find the parameters of that model that give me the right answer? That is what we will spend much of this course on.

Then there is a whole set of other questions you have to answer to build a useful data analytics or data mining solution. Do I have enough data, enough experience, to say that my model is good? Is the data of sufficient quality? There could be errors in the data: suppose I have medical data and an age is recorded as 225. What does that mean? It could be 225 days, which is a reasonable number; 22.5 years is reasonable; 22.5 months is reasonable; but 225 years is not, so something is wrong in the data. How do you handle such things, or noise in images, or missing values? I will talk briefly about handling missing values later in the course, but as I mentioned at the beginning this is a machine learning course: it is primarily concerned with the algorithms of machine learning, and the math and intuition behind them, not with the questions of building practical systems on top of them. I will touch on many of these issues during the course, but I want to reiterate that they will not be the focus. The next challenge listed is how confident I can be of the results; we will certainly talk a little about that, because the whole premise of reporting machine learning results depends on how confident you can be of them.
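The purity measure mentioned at the start of this passage can be sketched in a few lines of Python; the cluster assignments and external labels below are invented for illustration.

# Cluster purity: fraction of points that share the majority label of their cluster.
from collections import Counter

def purity(cluster_assignments, true_labels):
    clusters = {}
    for c, label in zip(cluster_assignments, true_labels):
        clusters.setdefault(c, []).append(label)
    majority_total = sum(Counter(labels).most_common(1)[0][1]
                         for labels in clusters.values())
    return majority_total / len(true_labels)

assignments = [0, 0, 0, 1, 1, 1]
labels = ["student", "student", "professional",
          "professional", "professional", "professional"]
print(purity(assignments, labels))   # 5 of 6 points match their cluster's majority label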
And the last question listed is: am I describing the data correctly? That is a very domain-dependent question, one you can answer only with experience as a machine learning or data science professional, and with time; but these are the typical questions, listed on the slide, that you should ask. From the next module we will look at the different learning paradigms in slightly more detail.

If you remember, in supervised learning we talked about experience, where you have some kind of description of the data. In this case let us assume I have a customer database, and I describe each customer by two attributes, age and income: for every customer who comes to my shop I know the age and the income level.

(Refer Slide Time: 00:48)

My goal is to predict whether the customer will buy a computer or not. I am given this kind of labelled data for building a classifier; remember that in classification the output is a discrete value, in this case yes or no: yes, this person will buy a computer, or no, this person will not. The input is described through a set of attributes, here age and income. The goal is to come up with a function, a mapping, that takes age and income as input and gives as output whether the person will buy a computer or not. There are many ways to create such a function, and since we are looking at a geometric interpretation of the data, treating data as points in space,

(Refer Slide Time: 01:57)

one of the most natural ways to define the function is by drawing lines or curves on the input space. Here is one possible example: I have drawn a line, and everything to the left of the line, where the points are red, is classified as "will not buy a computer", while everything to the right, where the points are predominantly blue, is classified as "will buy a computer". What would that function look like? Remember the x-axis is income and the y-axis is age. It basically says: if the income of the person is less than some value X, the person will not buy a computer; if the income is greater than X, the person will buy one. It is a simple function, and notice that it completely ignores one of the variables, the age; we are going purely by income. Is this a good rule? More or less: we get most of the points correct, except a few, so it looks like we could survive with this rule. It is not too bad, but we can do slightly better.

(Refer Slide Time: 03:29)

Now the two red points that were on the wrong side of the line earlier are on the right side: everything to the left of this new line will not buy a computer, and everyone to the right will buy one.
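The first, axis-parallel rule described above can be written out as a one-line function; the threshold value is an assumption made up purely for illustration.

# The simplest classifier: a single income threshold, age is ignored.
INCOME_THRESHOLD = 50_000   # hypothetical value of X

def will_buy_computer(income, age):
    return income > INCOME_THRESHOLD   # True -> "will buy", False -> "will not buy"

print(will_buy_computer(income=30_000, age=25))   # False
print(will_buy_computer(income=80_000, age=45))   # True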
If you think about what has happened here, we have improved our performance measure, but at a cost. What is the cost? Earlier we only paid attention to the income; now we have to pay attention to the age as well. The older you are, the higher the income threshold at which you will buy a computer; the younger you are (younger meaning lower on the y-axis), the lower the income threshold, so you do not mind buying a computer even if your income is slightly lower. So now we have to start paying attention to age, but the advantage is much better performance. Can we do better than this? Yes.

(Refer Slide Time: 04:54)

Now almost everything is correct except that one pesky red point; everything else is right. What has happened is that we get much better performance, but at the cost of a more complex classifier. In geometric terms, the first classifier was a line parallel to the y-axis, so I only needed to define an intercept on the x-axis: if x is less than some value it is one class, if greater it is the other. The second function was a slanted line, so I needed to define both an intercept and a slope. Now the boundary is a quadratic, so I have to define three parameters, something like ax² + bx + c, and with those three parameters I get better performance. Can we do better still?

(Refer Slide Time: 05:57)

This last curve does not seem right: it is far too complex a function just to get that one point correct. I am not even sure how many parameters it needs, because PowerPoint uses some kind of spline interpolation to draw such a curve, and I am fairly sure it is a lot more parameters than it is worth. Another thing to note is that this particular red point is surrounded by a sea of blue, so it is quite likely there was some glitch: either the person actually bought a computer and we simply did not record it, or for some extraneous reason the person came into the shop fully intending to buy a computer, got a phone call about an emergency and left without buying one. There could be a variety of reasons for that noise, and the previous, simpler classifier is probably the more appropriate one. These are the kinds of issues we have to think about: what complexity of classifier do I want versus what accuracy, that is, how good the classifier is at recovering the right input-output map; and is there noise in the data, in the experience I am getting, and if so how do I handle it?
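Extending the earlier one-parameter sketch, the two more complex boundaries discussed here can be written as rules with two and three parameters respectively; all coefficient values below are invented placeholders, not values from the lecture.

# A slanted-line boundary (2 parameters) and a quadratic boundary (3 parameters).
def rule_slanted_line(income, age, slope=1_000, intercept=20_000):
    # income threshold grows linearly with age
    return income > slope * age + intercept

def rule_quadratic(income, age, a=10.0, b=500.0, c=15_000.0):
    # income threshold is a quadratic function of age: a*age^2 + b*age + c
    return income > a * age**2 + b * age + c

print(rule_slanted_line(income=80_000, age=45), rule_quadratic(income=80_000, age=45))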
(Refer Slide Time: 07:31)

These lines that we drew hide one assumption we are making. The data comes to me as discrete points in the space, and from those discrete points I need to generalize and be able to say something about the entire space: wherever a new point falls on the x and y axes, I should be able to give it a label. If I do not make some assumption about these lines, the only thing I can do is answer for customers I have already seen: if the same customer comes again, or someone with exactly the same age and income, I can tell you whether that person will buy a computer, but I cannot say anything outside of my experience. The assumption we made is that everything to the left of the line behaves one way and everything to the right behaves the other: the lines, or curves, are able to segregate the people who will buy from those who will not. That is an assumption about the distribution of the input data and the class labels.

Such assumptions about these lines are known as inductive biases. In general, inductive bias comes in two categories. One is called language bias: what type of lines am I going to draw, straight lines or curves, what order of polynomials am I going to consider, and so on. The other is search bias, which tells me in what order I am going to examine all these possible lines. Putting the two together, we are able to generalize from a few training points to the entire space of inputs; I will make this more formal in the next set of modules.

(Refer Slide Time: 10:01)

Here is one way of looking at the whole process. I am given a set of data called the training set, which consists of inputs, which we call X, and outputs, which we call Y: a set of inputs X1, X2, X3, X4 and likewise Y1, Y2, Y3, Y4, and this data is fed into a training algorithm. In our case the Xs are the input attributes, income and age: x1 might be (30,000, 25) and x2 might be (80,000, 45), and the Ys are the labels, corresponding to the colours in the earlier picture: y1 is "does not buy a computer" (red) and y2 is "buys a computer" (blue). If I want to work with numeric values, I cannot really use these as they are: the Ys are not numeric, and the components of X vary too much in magnitude.
The first coordinate of X is on the order of 30,000 or 80,000, while the second coordinate is around 25 or 45, much smaller in magnitude, and that disparity can lead to numerical instabilities. So what we typically end up doing is normalizing the values so that they fall in approximately the same range. You can see that I have normalized the X values to between 0 and 1, choosing an income of 2 lakhs as the maximum and an age of 100, and likewise I have encoded "does not buy" as -1 and "buys a computer" as +1. These are arbitrary choices for now, but later you will see that there are specific reasons for wanting this encoding.

The training algorithm then works through this data and produces a classifier. At this point I do not know whether the classifier is good or bad; in the first case we had an axis-parallel straight line, we did not know whether it was good or bad, and we needed some mechanism to evaluate it. The way the evaluation is typically done is with what is called a test set or a validation set: another set of (x, y) pairs, just like the training set. In the test set we know what the labels are; we simply do not show them to the training algorithm, because we need the correct labels to evaluate whether the training algorithm is doing well. This process of evaluation is called validation. After validation, if you are happy with the quality of the classifier, you keep it; if not, you go back to the training algorithm and say, "I am not happy with what you produced, give me something different." You either iterate over the data again and refine the parameter estimation, or change some parameter values and redo the training all over again. This is the general process, and we will see that many of the algorithms we look at over the course of these lectures follow it.

(Refer Slide Time: 13:48)

So what happens inside the training algorithm? There is a learning agent that takes an input and produces an output ŷ, which it thinks is the correct output. This is compared against the actual target y provided in the training data; from the comparison we figure out the error, and the error is used to change the agent so that it produces the right output the next time around. This is essentially an iterative process: see an input, produce an output ŷ, take the target y, compare it with ŷ, figure out the error, and use the error to update the agent again. By and large this is how most learning algorithms operate, whether classification or regression, and we will see how each of them works as we go on.
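A minimal sketch of the pipeline just described is given below: normalize the inputs, encode the labels as -1/+1, "train" a deliberately trivial threshold classifier, and check it on a held-out validation set. The maxima (2 lakhs for income, 100 for age), the data values and the training rule are all made-up illustrations, not the lecture's method.

# Preprocess, train a trivial rule, and validate it on held-out data.
import numpy as np

MAX_INCOME, MAX_AGE = 200_000.0, 100.0

def preprocess(rows, labels):
    X = np.asarray(rows, dtype=float) / np.array([MAX_INCOME, MAX_AGE])  # scale to [0, 1]
    y = np.array([+1 if lab == "buys" else -1 for lab in labels])        # encode labels
    return X, y

X_train, y_train = preprocess([[30_000, 25], [80_000, 45], [120_000, 50], [20_000, 30]],
                              ["does not buy", "buys", "buys", "does not buy"])
X_val, y_val = preprocess([[90_000, 40], [25_000, 22]], ["buys", "does not buy"])

# "Training": as a stand-in for a real procedure, take the midpoint between the
# mean normalized incomes of the two classes as the decision threshold.
threshold = (X_train[y_train == +1, 0].mean() + X_train[y_train == -1, 0].mean()) / 2

def predict(X):
    return np.where(X[:, 0] > threshold, +1, -1)

val_error = np.mean(predict(X_val) != y_val)   # validation error decides keep vs. retrain
print(threshold, val_error)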
There are many, many applications.

(Refer Slide Time: 14:46)

They are too numerous to list, but here are a few examples. You could look at fraud detection, where the input is a set of transactions made by a user and each transaction has to be flagged as valid or not. You could look at sentiment analysis, variously called opinion mining or buzz analysis, where I give you a piece of text, say a review written about a product or a movie, and you tell me whether the review is positive or negative and what negative points people are mentioning; this again is a classification task. You could use it for churn prediction, where you predict whether a customer currently in the system is likely to leave or will continue using your product or service for a longer period of time; when a person leaves your service you call that person a churner, and you can label whether each person churned or not. And I have been giving you examples from medical diagnosis all through: apart from actually diagnosing whether a person has a disease, you can also use these methods for risk analysis in a slightly indirect way, which I will talk about when we cover the classification algorithms. We said we are interested in learning lines or curves that can separate the different classes in supervised learning; those curves can be represented using different structures, and throughout the course we will look at different kinds of learning mechanisms: artificial neural networks, support vector machines, decision trees, nearest neighbours and Bayesian networks. These are some of the popular ones, and we will look at them in more detail as the course progresses.

Another supervised learning problem is prediction,

(Refer Slide Time: 16:45)

or regression, where the output to be predicted is no longer a discrete value like "will buy a computer" versus "will not buy a computer" but a continuous value. Here is an example: at different times of day you have recorded the temperature, so the input to the system is the time of day and the output is the temperature measured at that point in time. Your experience, your training data, takes this form: the blue points are the inputs and the red points are the outputs you are expected to predict. Note that the outputs are continuous, real-valued numbers. In this toy example you could think of the points to the left as daytime and the points to the right as nighttime. Just as in classification, we could try the simplest possible fit, which here is a straight line drawn as close as possible to these points. You do see that, as in the classification case, choosing a simple solution means there are certain points at which we make large errors, and we could try to fix that.
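Here is a hedged sketch of that simplest fit, a straight line through (time of day, temperature) pairs; the measurements are invented for illustration.

# Least-squares straight-line fit to toy temperature readings.
import numpy as np

hours = np.array([6, 9, 12, 15, 18, 21, 24])                   # time of day
temps = np.array([22.0, 26.0, 31.0, 30.0, 27.0, 24.0, 22.5])   # degrees C

slope, intercept = np.polyfit(hours, temps, deg=1)   # best-fit line
predicted = slope * hours + intercept
print(slope, intercept)
print(np.abs(temps - predicted))   # per-point errors; large at some points

The large per-point errors are exactly what tempts us to try something fancier, which is discussed next.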
We could try to do something fancier, but then, while the daytime temperatures come out more or less fine, we seem to be doing something really off for the nighttime ones, because the curve swings too far to the right. Or we could do something even more complex, just as in the classification case where we wanted to get that one point right, and try to fit every temperature we were given with a sufficiently complex curve. As we discussed earlier, that is probably not the right answer, and you are probably, perhaps surprisingly, better off fitting the straight line. Solutions in which we end up fitting the noise in the data, in which the model is made to predict the noise in the training data correctly, are known as overfitted solutions, and one of the things we look to avoid in machine learning is overfitting the training data.

(Refer Slide Time: 19:21)

We will talk about this again later in the course. What we typically do is what is called linear regression, which some of you may have come across in different circumstances. The typical aim in linear regression is to take the error that your line is making: pick a training point, look at the actual value given to you and at the prediction the line makes at that point; the difference is the prediction error the line is making there. You then try to find the line that has the least prediction error: you take the squares of the errors the predictions make and minimize the sum of those squared errors. Why squares?

(Refer Slide Time: 20:31)

Because errors can be both positive and negative, and we want to minimize their magnitude regardless of sign. With sufficient data, linear regression is simple enough; you can solve it using matrix inversions, as we will see later. With many dimensions, the challenge is to avoid overfitting, as we discussed earlier, and there are many ways of doing that; I will talk about this in detail when we look at linear regression.

One point I do want to make is that linear regression is not as simple as it sounds. Here is an example. I have two input variables x1 and x2, and if I fit a straight-line model in x1 and x2 I end up with something like a1·x1 + a2·x2, which looks like a plane over the two input dimensions. But suppose I transform the input: instead of using just x1 and x2, I use x1², x2², x1·x2, and then x1 and x2 as before, so instead of a two-dimensional input I have a five-dimensional input. Now I fit a linear plane in this five-dimensional input, which looks like a1·x1² + a2·x2² + a3·x1·x2 + a4·x1 + a5·x2. That is no longer the equation of a line in two dimensions; it is the equation of a second-order polynomial in the original variables. But I can still think of it as linear regression, because the function is linear in the (transformed) input variables. By choosing an appropriate transformation of the inputs,

(Refer Slide Time: 22:38)

I can fit any higher-order function, so I can solve very complex problems using linear regression; it is not as weak a method as you might think at first glance. Again, we will look at this in slightly more detail in later lectures.
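Both ideas above, least-squares fitting "by matrix inversion" and making the same linear machinery fit a second-order surface by transforming (x1, x2) into (x1², x2², x1·x2, x1, x2), can be sketched as follows; all data values and coefficients are invented for illustration.

# Polynomial fit via linear regression on transformed features (normal equations).
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(0, 1, 50)
x2 = rng.uniform(0, 1, 50)
y = 2.0 * x1**2 - 1.0 * x1 * x2 + 0.5 * x2 + rng.normal(0, 0.05, 50)  # noisy target

# Transformed 5-dimensional input, plus a constant column for the intercept.
Phi = np.column_stack([x1**2, x2**2, x1 * x2, x1, x2, np.ones_like(x1)])

# w = (Phi^T Phi)^{-1} Phi^T y minimizes the sum of squared errors.
w = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
print(np.round(w, 2))                 # should be close to [2, 0, -1, 0, 0.5, 0]
print(np.sum((y - Phi @ w) ** 2))     # the minimized sum of squared errors

The model is a quadratic surface in (x1, x2), yet the fitting step is still ordinary linear least squares, because the function is linear in the transformed features.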
Regression, or prediction, can be applied in a variety of places. One popular use is time-series prediction: you could think of predicting the rainfall in a certain region, or how much you are going to spend on your telephone calls. You can even do classification with it: if you remember our encoding of +1 and -1 for the class labels, you can treat +1 and -1 as the outputs, fit a regression curve to them, and if the output is greater than 0 call the class +1, and if it is less than 0 call it -1; so regression ideas can be used to solve the classification problem. You can also do data reduction: I may not want to hand you the millions of data points in my data set, so instead I fit a curve to the data and give you just the coefficients of the curve, and more often than not that is sufficient to get a sense of the data. That brings us to the next application listed, trend analysis: quite often I am not really interested in the actual values of the data but in the trends. For example, if I am measuring the running time of a program, I am not that interested in whether it took 37 seconds or 38 seconds; what I really want to know is whether the running time scales linearly or exponentially with the size of the input. Those kinds of analyses can again be done using regression. The last item listed is risk factor analysis, as we had in classification: you can ask which factors contribute most to the output. That brings us to the end of this module on supervised learning.

Hello and welcome to this module on an introduction to unsupervised learning.

(Refer Slide Time: 00:26)

In supervised learning we looked at how to handle training data that has labels on it; this picture, for instance, is a classification data set where red denotes one class and blue denotes the other.

(Refer Slide Time: 00:35)

In unsupervised learning you basically have a lot of data that is given to you without any labels attached. We look first at the problem of clustering, where your goal is to find groups of coherent, or cohesive, data points in the input space; here is an example of possible clusters.
(Refer Slide Time: 00:57)

One set of data points could form a cluster, another set could form a second cluster, and so on; in this picture there are four clusters that we have identified. One thing to note is that even in something like clustering I need some form of bias. In this case the bias is in the shape of the clusters: I am assuming the clusters are ellipsoids, and that is why I have drawn curves of that specific shape to represent them. Also note that not all data points need to fall into clusters; there are a couple of points here that do not fall into any of them. That is partly an artifact of assuming ellipsoids, but such points really are far away from all the other points in the data set, and they are considered what are known as outliers. So when you do clustering there are two things you are interested in: finding cohesive groups of points, and finding data points that do not conform to the patterns in the input, which are the outliers.

(Refer Slide Time: 02:23)

There are many different ways of accomplishing clustering, and we will look at a few in the course; the applications are numerous. One is to look at customer data and try to discover the classes of customers. Earlier, in the supervised setting, we asked whether a customer will or will not buy a computer; here, instead, we take all the customer data and simply group the customers into different kinds, and then we can run targeted promotions for the different classes. This does not come with labels: I am not telling you that this customer is class 1 and that customer is class 2; you are just finding out which customers are most similar to each other. A second application illustrated here is clustering image pixels so that you discover different regions of the image and can segment it based on those regions; for example, from a picture of a beach scene you can pick out the clouds, the sand, the sea and the tree, which lets you make more sense of the image. You could also do clustering on word usages and discover synonyms, or clustering on documents, grouping documents that are similar to each other; if I give you a collection of, say, 100,000 documents, I might be able to figure out the different topics discussed in that collection. And there are many other ways in which clustering can be used.
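As one possible concrete sketch of the clustering-plus-outliers idea, here is k-means from scikit-learn on synthetic "customer" blobs; the library choice, the number of clusters, the data and the outlier threshold are all assumptions for illustration, not the lecture's algorithm.

# k-means clustering with a crude distance-based outlier flag (toy data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
blobs = [c + rng.normal(0, 0.3, size=(20, 2)) for c in centers]   # four tight groups
strays = np.array([[10.0, 10.0], [-4.0, 8.0]])                    # two far-away points
X = np.vstack(blobs + [strays])

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)  # distance to own centre
print(np.bincount(km.labels_))   # cluster sizes
print(X[dists > 3.0])            # points far from their own centre: candidate outliers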
(Refer Slide Time: 04:17)

Let me give you an aside about the use of the word "mining" here. Many of you might have heard the term data mining, and more often than not the purported data mining tasks are essentially machine learning problems: classification, regression and so on. The first problem that was introduced as a mining problem, rather than a learning problem, was that of mining frequent patterns and associations, and that is one reason I call this association rule mining rather than association rule learning, just to keep the historical connection intact. In association rule mining we are interested in finding frequent patterns that occur in the input data, and then in conditional dependencies among those patterns. For example, if A and B occur together often, I can say something like "if A happens, then B will happen". Suppose customers come to your shop, and whenever customer A visits, customer B also tags along; then the next time you find A somewhere in the shop, you can say with very high confidence that B is also in the shop somewhere, maybe not with A, but somewhere.

These are the kinds of rules we are looking at: association rules, which are conditional dependencies of the form "if A has come, then B is also there". The association rule mining process usually goes in two stages. First we find all frequent patterns: A comes to my store often, and the pair A and B come to my store together often. Once I have those frequent patterns, I can derive associations from them. You can do this in a variety of settings: you can find sequences in time-series data, where you look for triggers of certain events; you can do fault analysis, looking at sequences of events to figure out which event occurs most often with the fault; or you can look at transaction data, where the most popular example is market basket data: you go to a shop, buy a bunch of things together and put them in your basket, and that basket, say eggs, milk and bread, forms a transaction. You can then find the frequently occurring patterns in this purchase data and make rules out of them. You can also look for patterns in graphs, which is typically used in social network analysis: which kinds of interactions among entities happen often.

(Refer Slide Time: 07:31)

The most popular application here is mining transactions; as I mentioned, a transaction is a collection of items bought together. A little terminology: a set, or subset, of items is often called an itemset in the association rule mining community. The first step is to find frequent itemsets.
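The support and confidence computations that underlie the rule derivation described just below can be sketched in a few lines; the market-basket transactions here are made up.

# Support and confidence over toy market-basket transactions.
transactions = [
    {"eggs", "milk", "bread"},
    {"milk", "bread"},
    {"eggs", "bread"},
    {"milk", "bread", "butter"},
    {"eggs", "milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(A, B):
    """Estimate of P(B in basket | A in basket) = support(A u B) / support(A)."""
    return support(set(A) | set(B)) / support(A)

print(support({"milk", "bread"}))        # 3/5 = 0.6
print(confidence({"milk"}, {"bread"}))   # 0.6 / 0.8 = 0.75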
You can then conclude that itemset A implies itemset B if both A and A∪B are frequent itemsets: A and B are itemsets, A∪B is another itemset, and if both A and A∪B are frequent, you can say that itemset A implies itemset B. As I mentioned earlier, there are many applications here: you can think of predicting the co-occurrence of events,

(Refer Slide Time: 08:31)

market basket analysis, and time-series analysis of the kind I mentioned earlier, where you look for trigger events or the causes of faults. This brings us to the end of this module introducing unsupervised learning.

There is a 30-day refund period!! So you have nothing to lose!!"
Price: 199.99

"Docker Essentials for Python Developers"
"Docker & Containers are Foundations of modern DevOps practices. These are must-have skills of Full Stack Developers.Containers have become a standard in Production-grade Deep Learning applications.Every Python Developer must be fluent and comfortable in using Containers at every step of Application lifecycle.You learn Docker and Containers quickly in this Course.It is designed to be very practical and give you Quick Win without spending too much time. I use Minimal Manual teaching approach: each idea, concept or skill has a dedicated Lecture. This way you are going to learn much faster.Here you learn everything important to prove you know Containers:How to build and run Containers with Python AppsContainerize Flask-based Microservices and Django Web AppsUse Docker in Data Science and Machine Learning EnvironmentsCreate complex Development & Test Environments with Docker ComposeYou are going to get practical results in first hour of using this Course!Don't wait any longer, start using Containers now!Course Introduction section is free to watch, as well as first Lectures of each Section. Check them out!"
Price: 99.99