Detail view

Data analysis

Standard deviation, Variance, Data mining, Statistical assumption, Principal component analysis, Outlier, Experimental uncertainty analysis, Cluster analysis, German tank problem, Window function, Bootstrapping, Text mining
ISBN/EAN: 9781157488392
Umbreit no.: 5693601

Language: English
Extent: 118 pages
Dimensions in cm: 0.7 x 24.6 x 18.9
Binding: paperback

Published on 7 October 2013
Edition: 1/2013
€28.47
(incl. VAT)
Available for delivery within 1 to 2 weeks
  • Additional text
    • Source: Wikipedia. Pages: 118. Chapters: Standard deviation, Variance, Data mining, Statistical assumption, Principal component analysis, Outlier, Experimental uncertainty analysis, Cluster analysis, German tank problem, Window function, Bootstrapping, Text mining, Algorithmic inference, Collocation, Cumulative frequency analysis, Data visualization, Data transformation, Independent component analysis, Covariance matrix, Forecasting, Contingency table, Text analytics, Bootstrapping populations, Clustering high-dimensional data, Exponential smoothing, Photoanalysis, Item tree analysis, Missing data, Probit, Boolean analysis, Post-hoc analysis, Power transform, Segmented regression, Standard score, K-medoids, Exploratory data analysis, 1.96, Empirical distribution function, TinkerPlots, ANOVA-simultaneous component analysis, Neighbourhood components analysis, Index of dispersion, Cluster-weighted modeling, Explained variation, Correlation clustering, Overdispersion, Multitrait-multimethod matrix, Anscombe transform, Educational data mining, Topological data analysis, 68-95-99.7 rule, Local convex hull, Univariate analysis, Counternull, Multiple correspondence analysis, Lincoln index, Visual inspection, Evolutionary data mining, Stationary subspace analysis, Grouped data, Political forecasting, Imputation, Silhouette, Inverse Mills ratio, Natural Language Toolkit, Health care analytics, Data classification, LISREL, Barnard's test, Functional data analysis, Limited dependent variable, Visual comparison, Training set, Shape of the distribution, Test set, Data reduction, Variance-stabilizing transformation, Structured data analysis, Wide and narrow data, Normal score, Proxy, Cross tabulation, Quantile normalization, Double mass analysis, Geometric data analysis, Report mining, Grand mean, Data Discovery and Query Builder, Fathom: Dynamic Data Software, Combinatorial data analysis, Self-modeling mixture analysis, Multiscale geometric analysis, Barnardisation, 
Inverse-variance weighting, Reification, Subgroup analysis, Oversampling and undersampling in data analysis, Principal geodesic analysis, Standardized rate.

      Excerpt: The purpose of this introductory article is to discuss the experimental uncertainty analysis of a derived quantity, based on the uncertainties in the experimentally measured quantities that are used in some form of mathematical relationship ("model") to calculate that derived quantity. The model used to convert the measurements into the derived quantity is usually based on fundamental principles of a science or engineering discipline. The uncertainty has two components, namely, bias (related to accuracy) and the unavoidable random variation that occurs when making repeated measurements (related to precision). The measured quantities may have biases, and they certainly have random variation, so that what needs to be addressed is how these are "propagated" into the uncertainty of the derived quantity. Uncertainty analysis is often called the "propagation of error." It will be seen that this is a difficult and in fact sometimes intractable problem when handled in detail. Fortunately, approximate solutions are available that provide very useful results, and these approximations will be discussed in the context of a practical experimental example. Rather than providing a dry collection of equations, this article will focus on the experimental uncertainty analysis of an undergraduate physics lab experiment in which a pendulum is used.
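The first-order "propagation of error" approximation the excerpt refers to can be sketched for the pendulum case it mentions. A minimal illustration, assuming the standard small-angle model g = 4π²L/T² and illustrative measurement values that are not taken from the book:

```python
import math

def pendulum_g(L, T):
    # g derived from the small-angle pendulum model T = 2*pi*sqrt(L/g)
    return 4 * math.pi**2 * L / T**2

def propagated_sigma_g(L, sigma_L, T, sigma_T):
    # First-order (linearized) propagation of error:
    #   sigma_g^2 = (dg/dL * sigma_L)^2 + (dg/dT * sigma_T)^2
    # assuming the L and T uncertainties are independent.
    dg_dL = 4 * math.pi**2 / T**2
    dg_dT = -8 * math.pi**2 * L / T**3
    return math.hypot(dg_dL * sigma_L, dg_dT * sigma_T)

# Hypothetical measurements: length in metres, period in seconds
L, sigma_L = 0.500, 0.002
T, sigma_T = 1.420, 0.005

g = pendulum_g(L, T)
sigma_g = propagated_sigma_g(L, sigma_L, T, sigma_T)
print(f"g = {g:.2f} +/- {sigma_g:.2f} m/s^2")
```

The random-variation component is what this linearization captures; any bias in L or T propagates directly through the model and must be treated separately, as the excerpt notes.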