ASSIGNMENT

Refer to the following frequency distribution for Questions 1, 2, 3, and 4.

 

The frequency distribution below shows the distribution of suspended solid concentration (in ppm) in river water for 50 different water samples collected in September 2011.

 

Concentration (ppm)     Frequency
20 - 29                     1
30 - 39                     8
40 - 49                     8
50 - 59                    10
60 - 69                    12
70 - 79                     7
80 - 89                     2
90 - 99                     2

 

 

1.         What percentage of the rivers had suspended solid concentration greater than or equal to 70 ppm?

2.         Calculate the mean of this frequency distribution. 

3.         In what class interval must the median lie?  Explain your answer.  (You do not have to find the median.)

4.         Assume that the smallest observation in this dataset is 20.  Suppose this observation were incorrectly recorded as 2 instead of 20.  Will the mean increase, decrease, or remain the same?  Will the median increase, decrease or remain the same?  Explain your answers.
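
A minimal Python sketch of the grouped-data calculations for Questions 1-3 is given below; it assumes the usual textbook convention of class midpoints (24.5, 34.5, ...).  Question 4 is conceptual and is not computed here.

# Grouped-frequency calculations for Questions 1-3, assuming the usual
# textbook convention of class midpoints (e.g. the 20-29 class has midpoint 24.5).
classes = [(20, 29, 1), (30, 39, 8), (40, 49, 8), (50, 59, 10),
           (60, 69, 12), (70, 79, 7), (80, 89, 2), (90, 99, 2)]

n = sum(f for _, _, f in classes)                     # 50 samples in all

# Question 1: percentage of samples with concentration >= 70 ppm
pct_70_plus = 100 * sum(f for lo, _, f in classes if lo >= 70) / n

# Question 2: mean of the grouped distribution, using class midpoints
mean = sum((lo + hi) / 2 * f for lo, hi, f in classes) / n

# Question 3: with n = 50 the median is the average of the 25th and 26th
# ordered values, so find the class whose cumulative frequency first reaches n/2
cum = 0
for lo, hi, f in classes:
    cum += f
    if cum >= n / 2:
        median_class = (lo, hi)
        break

print(pct_70_plus)     # 22.0 (%)
print(round(mean, 1))  # 57.1
print(median_class)    # (50, 59)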

 


Refer to the following information for Questions 5 and 6.

 

A coin is tossed 4 times.  Let A be the event that the first toss is heads.  Let B be the event that the third toss is heads.

 

5.         What is the probability that the third toss is heads, given that the first toss is heads?                                      

6.         Are A and B independent?  Why or why not?                                                                         
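
Questions 5 and 6 can be checked by listing the 16 equally likely outcomes of four tosses; a short sketch:

from itertools import product

# Enumerate the 16 equally likely outcomes of 4 coin tosses (Questions 5-6).
outcomes = list(product("HT", repeat=4))

A = [o for o in outcomes if o[0] == "H"]                     # first toss heads
B = [o for o in outcomes if o[2] == "H"]                     # third toss heads
AB = [o for o in outcomes if o[0] == "H" and o[2] == "H"]    # both events

p_B = len(B) / len(outcomes)
p_B_given_A = len(AB) / len(A)

print(p_B_given_A)           # 0.5  (Question 5)
print(p_B_given_A == p_B)    # True, so A and B are independent (Question 6)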


 


Refer to the following data to answer Questions 7 and 8.  Show all work.  Just the answer, without supporting work, will receive no credit.

 

A random sample of song playing times in seconds is as follows:

 

242   231   220   213    230    293

 

7.         Find the standard deviation.                                                                            

8.         Are any of these playing times considered unusual in the sense of our textbook?  Explain.  Does this differ from your intuition?  Explain.
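
A minimal sketch for Questions 7 and 8, assuming the common textbook rule that a value more than two standard deviations from the mean is unusual:

import statistics

# Sample of song playing times in seconds (Questions 7-8)
times = [242, 231, 220, 213, 230, 293]

xbar = statistics.mean(times)        # 238.17
s = statistics.stdev(times)          # sample standard deviation, about 28.6

# Assumed rule: values more than 2 standard deviations from the mean are unusual.
low, high = xbar - 2 * s, xbar + 2 * s
unusual = [t for t in times if t < low or t > high]

print(f"mean = {xbar:.2f}, s = {s:.2f}")
print(f"usual range: ({low:.1f}, {high:.1f}); unusual values: {unusual}")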


Refer to the following situation for Questions 9, 10, and 11.

 

The boxplots below show the real estate values of single family homes in two neighboring cities, in thousands of dollars.

 

[Boxplot figure: real estate values (in thousands of dollars) for Tinytown and BigBurg]

For each question, give your answer as one of the following:  (a) Tinytown; (b) BigBurg; (c) Both cities have the same value requested; (d) It is impossible to tell using only the given information.  Then explain your answer in each case.

 

9.         Which city has greater variability in real estate values?

10.       Which city has the greater percentage of households with values $85,000 and over?

11.       Which city has a greater percentage of homes with real estate values between $55,000 and $85,000?


12.       A random sample of the lifetimes of 49 UltraIllum light bulbs has a mean of 3,960 hours and a standard deviation of 200 hours.  Construct a 95% confidence interval estimate of the mean lifetime for all UltraIllum light bulbs.
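
A sketch of the interval for Question 12, using a t critical value since the population standard deviation is not given (scipy is assumed to be available):

import math
from scipy import stats

# 95% t confidence interval for the mean lifetime (Question 12).
n, xbar, s = 49, 3960, 200

se = s / math.sqrt(n)                      # standard error = 200/7
t_crit = stats.t.ppf(0.975, n - 1)         # about 2.011 for 48 df
margin = t_crit * se

print(f"95% CI: ({xbar - margin:.1f}, {xbar + margin:.1f})")   # roughly (3902.6, 4017.4)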


 


Refer to the following information for Questions 13 and 14.

 

There are 500 students in the senior class at a certain high school.  The high school offers two Advanced Placement math/stat classes to seniors only: AP Calculus and AP Statistics.  The roster of the Calculus class shows 95 people; the roster of the Statistics class shows 86 people.  There are 43 overachieving seniors on both rosters.

 

13.       What is the probability that a randomly selected senior is in exactly one of the two classes (but not both)?                                                                                                   

14.       If the student is in the Statistics class, what is the probability the student is also in the Calculus class?                                                                                                          
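
The counts in Questions 13 and 14 reduce to simple inclusion-exclusion arithmetic; a quick check:

# Counts from the class rosters (Questions 13-14)
total, calc, stat, both = 500, 95, 86, 43

calc_only = calc - both                # 52 students in Calculus only
stat_only = stat - both                # 43 students in Statistics only

p_exactly_one = (calc_only + stat_only) / total      # Question 13
p_calc_given_stat = both / stat                      # Question 14

print(p_exactly_one)        # 0.19
print(p_calc_given_stat)    # 0.5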


Refer to the following information for Questions 15, 16, and 17.

 

A box contains 10 chips.  The chips are numbered 1 through 10.  Otherwise, the chips are identical.  From this box, we draw one chip at random, and record its value.  We then put the chip back in the box.  We repeat this process two more times, making three draws in all from this box.

 

15.       How many elements are in the sample space of this experiment?                                           

16.       What is the probability that the three numbers drawn are all different?                                  

17.       What is the probability that the three numbers drawn are all even numbers?                          
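
Because the chip is replaced after each draw, the sample space for Questions 15-17 can be enumerated directly:

from itertools import product

# Three draws with replacement from chips numbered 1-10 (Questions 15-17).
sample_space = list(product(range(1, 11), repeat=3))

n_outcomes = len(sample_space)                                                    # Question 15
p_all_different = sum(len(set(d)) == 3 for d in sample_space) / n_outcomes        # Question 16
p_all_even = sum(all(x % 2 == 0 for x in d) for d in sample_space) / n_outcomes   # Question 17

print(n_outcomes)        # 1000
print(p_all_different)   # 0.72  (10*9*8 / 1000)
print(p_all_even)        # 0.125 (5**3 / 1000)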


Questions 18 and 19 involve the random variable x with probability distribution given below.

x       2     3     5     8     10
P(x)    0.1   0.3   0.4   0.1   0.1

 

18.       Determine the expected value of x.                                                                

19.       Determine the standard deviation of x.                                                                       
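
A quick check of the expected value and standard deviation for Questions 18 and 19:

import math

# Probability distribution of x (Questions 18-19)
dist = {2: 0.1, 3: 0.3, 5: 0.4, 8: 0.1, 10: 0.1}

mean = sum(x * p for x, p in dist.items())               # expected value
var = sum((x - mean) ** 2 * p for x, p in dist.items())  # variance
sd = math.sqrt(var)

print(f"E(x) = {mean}")        # 4.9
print(f"SD(x) = {sd:.2f}")     # about 2.34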


Consider the following situation for Questions 20 and 21.

 

Airline overbooking is a common practice.  Due to uncertain plans, many people cancel at the last minute or simply fail to show up.  Air Eagle is a small commuter airline.  Its past records indicate that 80% of the people who make a reservation will show up for the flight.  The other 20% do not show up.  Air Eagle decided to book 12 people for today’s flight.  Today’s flight has just 10 seats. 

 

20.       Find the probability that there are enough seats for all the passengers who show up.  (Hint:  Find the probability that, of the 12 people booked, 10 or fewer show up.)

21.       How many passengers are expected to show up?                                                       
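
The number of passengers who show up in Questions 20 and 21 is binomial with n = 12 and p = 0.8; a short check of the hint:

from math import comb

# Number of people who show up is binomial with n = 12, p = 0.8 (Questions 20-21).
n, p = 12, 0.8

def binom_pmf(k, n, p):
    """P(X = k) for a binomial(n, p) random variable."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_enough_seats = sum(binom_pmf(k, n, p) for k in range(0, 11))   # P(X <= 10)
expected_show_ups = n * p

print(f"P(at most 10 show up) = {p_enough_seats:.3f}")   # about 0.725
print(f"Expected show-ups = {expected_show_ups}")        # 9.6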

 


 

 

 


22.       Given a sample size of 65, with sample mean 726.2 and sample standard deviation 85.3, we perform the following hypothesis test.

           

What is the conclusion of the test at the given significance level?  Explain your answer.
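
The hypotheses and significance level for Question 22 did not survive extraction, so the sketch below uses placeholder values (mu0 and alpha) that must be replaced with the ones in the original problem; a one-sample t statistic is assumed since only the sample standard deviation is given.

import math
from scipy import stats

# One-sample t test sketch for Question 22.
# mu0 and alpha are PLACEHOLDERS: substitute the null value and significance
# level from the original problem statement.
n, xbar, s = 65, 726.2, 85.3
mu0, alpha = 700.0, 0.05          # placeholder values, not from the problem

t_stat = (xbar - mu0) / (s / math.sqrt(n))
p_two_sided = 2 * stats.t.sf(abs(t_stat), n - 1)

print(f"t = {t_stat:.3f}, two-sided p-value = {p_two_sided:.4f}")
print("Reject H0" if p_two_sided < alpha else "Fail to reject H0")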


 

Refer to the following information for Questions 23, 24, and 25.                                          

 

The BestEver credit scores are normally distributed with a mean of 600 and a standard deviation of 100.

 

23.       What is the probability that a randomly selected person has a BestEver credit score between 500 and 700?

24.       Find the 90th percentile of the BestEver credit score distribution.

25.       If a random sample of 100 people is selected, what is the standard deviation of the sample mean of the BestEver credit scores?
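
A sketch of the normal-distribution calculations for Questions 23-25, using the standard library's NormalDist:

import math
from statistics import NormalDist

# BestEver credit scores ~ Normal(mean = 600, sd = 100)  (Questions 23-25)
scores = NormalDist(mu=600, sigma=100)

p_between = scores.cdf(700) - scores.cdf(500)     # Question 23, about 0.6827
pct_90 = scores.inv_cdf(0.90)                     # Question 24, about 728.2
se_mean = 100 / math.sqrt(100)                    # Question 25, standard error = 10

print(f"P(500 < score < 700) = {p_between:.4f}")
print(f"90th percentile       = {pct_90:.1f}")
print(f"SD of sample mean     = {se_mean}")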


26.       Consider the hypothesis test given by                                                 

           

In a random sample of 81 subjects, the sample mean is found to be … .  Also, the population standard deviation is … .

 

Determine the P-value for this test.  Is there sufficient evidence to justify rejection of the null hypothesis at the given significance level?  Explain.


27.       A certain researcher thinks that the proportion of women who say that female bosses are harshly critical is greater than the proportion of men.

 

In a random sample of 200 women, 27% said that female bosses are harshly critical.

In a random sample of 220 men, 25% said that female bosses are harshly critical.

           

At the 0.05 significance level, is there sufficient evidence to support the claim that the proportion of women saying female bosses are harshly critical is higher than the proportion of men saying female bosses are harshly critical?  Show all work and justify your answer.
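
A sketch of a one-sided pooled two-proportion z test for Question 27, taking the stated percentages as 54 of 200 women and 55 of 220 men:

import math
from statistics import NormalDist

# One-sided two-proportion z test for Question 27.
# H0: p_women <= p_men   vs   Ha: p_women > p_men
n_w, p_hat_w = 200, 0.27      # women: 27% of 200 said "harshly critical" (54 people)
n_m, p_hat_m = 220, 0.25      # men:   25% of 220 said "harshly critical" (55 people)

x_w = round(p_hat_w * n_w)
x_m = round(p_hat_m * n_m)

p_pool = (x_w + x_m) / (n_w + n_m)                               # pooled proportion
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_w + 1 / n_m))

z = (p_hat_w - p_hat_m) / se
p_value = 1 - NormalDist().cdf(z)                                # right-tailed test

print(f"z = {z:.3f}, p-value = {p_value:.3f}")
print("Reject H0" if p_value < 0.05 else "Fail to reject H0 at the 0.05 level")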


28.       Randomly selected nonfatal occupational injuries and illnesses are categorized according to the day of the week that they first occurred, and the results are listed below.  Use a 0.05 significance level to test the claim that such injuries and illnesses occur with equal frequency on the different days of the week.  Show all work and justify your answer.

                                   

Day       Mon   Tue   Wed   Thu   Fri
Number     23    23    21    20    18
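
A sketch of the chi-square goodness-of-fit test for Question 28; the critical value 9.488 is the standard table value for alpha = 0.05 with 4 degrees of freedom:

# Chi-square goodness-of-fit test for Question 28.
observed = {"Mon": 23, "Tue": 23, "Wed": 21, "Thu": 20, "Fri": 18}

total = sum(observed.values())                 # 105 injuries in all
expected = total / len(observed)               # 21 per day under equal frequency

chi_sq = sum((o - expected) ** 2 / expected for o in observed.values())
df = len(observed) - 1                         # 4 degrees of freedom

critical = 9.488                               # chi-square table value, alpha = 0.05, 4 df
print(f"chi-square = {chi_sq:.3f}, df = {df}")
print("Reject equal-frequency claim" if chi_sq > critical
      else "Fail to reject: data are consistent with equal frequency")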


 

 

 


Refer to the following data for Questions 29 and 30.

x     0    -1    1    1    2
y     2    -2    5    4    6

 

 

29.       Is there a linear correlation between x and y at the 0.01 significance level? Justify your answer.                                                                                                                   

30.       Find an equation of the least squares regression line.  Show all work; writing the correct equation, without supporting work, will receive no credit.                                  
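
A sketch of the correlation and least-squares calculations for Questions 29 and 30; the 0.01 critical value of r quoted in the comment is the standard table value for n = 5:

import math

# Correlation and least-squares regression for Questions 29-30.
x = [0, -1, 1, 1, 2]
y = [2, -2, 5, 4, 6]
n = len(x)

sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
syy = sum(v * v for v in y)
sxy = sum(a * b for a, b in zip(x, y))

r = (n * sxy - sx * sy) / math.sqrt((n * sxx - sx**2) * (n * syy - sy**2))
b = (n * sxy - sx * sy) / (n * sxx - sx**2)        # slope
a = (sy - b * sx) / n                              # intercept

print(f"r = {r:.4f}")                  # about 0.971
print(f"y-hat = {a:.4f} + {b:.4f}x")   # about y = 1.3846 + 2.6923x

# For n = 5 (3 df) the two-tailed critical value of r at the 0.01 level is
# about 0.959 (table value), so r = 0.971 indicates a significant linear correlation.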

______________________________________________________________________________

 

 

 
