AcF 633 - Python Programming for Data Analysis
Final Individual Project
20th March 2025 noon/12pm to 10th April 2025 noon/12pm (UK time)
This assignment contains one question worth 100 marks and constitutes 55% of the total marks for this course.
You are required to submit to Moodle a SINGLE .zip folder containing a SINGLE Jupyter Notebook .ipynb file OR Python script .py file, together with any supporting .csv files (e.g. input data files; however, do NOT include the ‘GOOG 202001.csv.gz’ data file as it is large and may slow down the upload and submission) AND a signed coursework coversheet. The name of this folder should be your student ID or library card number (e.g. 12345678.zip, where 12345678 is your student ID).
In your main script, either Jupyter Notebook .ipynb file or Python .py file, you do not have to retype the question for each task. However, you must clearly label which task (e.g. 1.1, 1.2, etc) your subsequent code is related to, either by using a markdown cell (for .ipynb file) or by using comments (e.g. #1.1 or ‘‘‘1.1’’’ for .py file). Provide only ONE answer to each task. If you have more than one method to answer a task, choose the one that you think is best and most efficient. If multiple answers are provided for a task, only the first answer will be marked.
Your submission .zip folder MUST be submitted electronically via Moodle by the 10th April 2025 noon/12pm (UK time). Email submissions will NOT be considered. If you have any issues with uploading and submitting your work to Moodle, please email Carole Holroyd at [email protected] BEFORE the deadline for assistance with your submission.
This assignment is AI Assessment AMBER (i.e. Generative AI tools can be used in an assistive role). Please refer to the University position page: University position on Artificial Intelligence for more details about AI Assessment RAG categories. If you use AI to assist your work, you are required to submit an AI appendix.
The following penalties will be applied to all coursework that is submitted after the specified submission date:
Up to 3 days late - deduction of 10 marks
Beyond 3 days late - no marks awarded
Good Luck!
Question 1:
Task 1: High-frequency Finance (Σ = 35 marks)
The data file ‘GOOG 202001.csv.gz’ contains the tick-by-tick transaction data for
stock GOOG in January 2020, with the following information:
Fields     Definitions
DATE       Date of transaction
TIME_M     Time of transaction (seconds since midnight)
SYM_ROOT   Security symbol root
EX         Exchange where the transaction was executed
SIZE       Transaction size
PRICE      Transaction price
NBO        Ask price (National Best Offer)
NBB        Bid price (National Best Bid)
NBOqty     Ask size
NBBqty     Bid size
BuySell    Buy/Sell indicator (1 for buys, -1 for sells)
Import the data file into Python and perform the following tasks:
1.1: Write code to perform the filtering steps below in the following order: (15 marks)
F1: Remove entries with either transaction price, transaction size, ask price, ask size, bid price or bid size ≤ 0
F2: Remove entries with bid-ask spread (i.e. ask price - bid price) ≤ 0
F3: Aggregate entries that are (a) executed at the same date and time (i.e. same ‘DATE’ and ‘TIME_M’), (b) executed on the same exchange, and (c) of the same buy/sell indicator, into a single transaction with the median transaction price, median ask price, median bid price, sum of transaction sizes, sum of ask sizes and sum of bid sizes.
F4: Remove entries for which the bid-ask spread is more than 50 times the median bid-ask spread on each day
F5: Remove entries with a transaction price that is either above the ask price plus the bid-ask spread, or below the bid price minus the bid-ask spread
Create a data frame called summary of the following format that shows the number and proportion of entries removed by each of the above filtering steps. The proportions (in %) are calculated as the number of entries removed divided by the original number of entries (before any filtering).
             F1    F2    F3    F4    F5
Number
Proportion
Here, F1, F2, F3, F4 and F5 are the columns corresponding to the above 5 filtering rules, and Number and Proportion are the row indices of the data frame.
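For illustration, a minimal sketch of one possible Task 1.1 pipeline is given below. It assumes the file can be read directly with pandas (pd.read_csv handles .gz compression), that a DatetimeIndex is built from DATE (assumed here to be in YYYYMMDD format) and TIME_M (seconds since midnight), and that the column names match the field list above; adjust the file name, formats and column names to the actual data.

import numpy as np
import pandas as pd

df = pd.read_csv('GOOG 202001.csv.gz')           # adjust path/name if needed
df.index = (pd.to_datetime(df['DATE'], format='%Y%m%d') +
            pd.to_timedelta(df['TIME_M'], unit='s'))

n0 = len(df)
counts = {}

# F1: non-positive prices or sizes
f1 = df[(df[['PRICE', 'SIZE', 'NBO', 'NBOqty', 'NBB', 'NBBqty']] > 0).all(axis=1)]
counts['F1'] = n0 - len(f1)

# F2: non-positive bid-ask spread
f2 = f1[(f1['NBO'] - f1['NBB']) > 0]
counts['F2'] = len(f1) - len(f2)

# F3: aggregate same timestamp / exchange / buy-sell indicator
agg = {'PRICE': 'median', 'NBO': 'median', 'NBB': 'median',
       'SIZE': 'sum', 'NBOqty': 'sum', 'NBBqty': 'sum'}
f3 = (f2.groupby([f2.index, f2['EX'], f2['BuySell']]).agg(agg)
        .reset_index(level=['EX', 'BuySell']))
counts['F3'] = len(f2) - len(f3)

# F4: spread more than 50 times the daily median spread
spread = f3['NBO'] - f3['NBB']
med = spread.groupby(f3.index.date).transform('median')
f4 = f3[spread <= 50 * med]
counts['F4'] = len(f3) - len(f4)

# F5: price outside [bid - spread, ask + spread]
spread4 = f4['NBO'] - f4['NBB']
f5 = f4[(f4['PRICE'] <= f4['NBO'] + spread4) & (f4['PRICE'] >= f4['NBB'] - spread4)]
counts['F5'] = len(f4) - len(f5)

clean = f5
summary = pd.DataFrame([counts, {k: 100 * v / n0 for k, v in counts.items()}],
                       index=['Number', 'Proportion'])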
1.2: Using the cleaned data from Task 1.1, write code to compute Realized Volatility (RV), Bipower Variation (BV) and Truncated Realized Volatility (TRV) measures (defined in the lectures) for each trading day in the sample using different sampling frequencies including 1 second (1s), 2s, 3s, 4s, 5s, 10s, 15s, 20s, 30s, 40s, 50s, 1 minute (1min), 2min, 3min, 4min, 5min, 6min, 7min, 8min, 9min, 10min, 15min, 20min and 30min. The required outputs are 3 data frames RVdf, BVdf and TRVdf (for Realized Volatility, Bipower Variation and Truncated Realized Volatility respectively), each having columns being the above sampling frequencies and row index being the unique dates in the sample. (10 marks)
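A minimal sketch of one way to compute these measures is shown below. It assumes the cleaned data from the Task 1.1 sketch ('clean', with a DatetimeIndex and a 'PRICE' column), the standard definitions RV = Σ r_i², BV = (π/2) Σ |r_i||r_{i-1}| and TRV = Σ r_i² 1{|r_i| ≤ u}, and a truncation threshold of the form u = c·√BV·Δ^0.49 with c a tuning constant; the exact definitions and threshold should follow the lecture slides.

import numpy as np
import pandas as pd

freqs = ['1s', '2s', '3s', '4s', '5s', '10s', '15s', '20s', '30s',
         '40s', '50s', '1min', '2min', '3min', '4min', '5min', '6min',
         '7min', '8min', '9min', '10min', '15min', '20min', '30min']

days = sorted(clean.index.normalize().unique())
RVdf, BVdf, TRVdf = (pd.DataFrame(index=days, columns=freqs, dtype=float)
                     for _ in range(3))

for day, day_data in clean.groupby(clean.index.date):
    for f in freqs:
        # last observed price in each interval, forward-filled within the day
        px = day_data['PRICE'].resample(f).last().ffill()
        r = np.log(px).diff().dropna().values
        if len(r) < 2:
            continue
        rv = np.sum(r ** 2)
        bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
        delta = 1.0 / len(r)                      # interval length as a fraction of the day
        u = 4.0 * np.sqrt(bv) * delta ** 0.49     # assumed threshold; adjust per lecture
        trv = np.sum(r[np.abs(r) <= u] ** 2)
        RVdf.loc[pd.Timestamp(day), f] = rv
        BVdf.loc[pd.Timestamp(day), f] = bv
        TRVdf.loc[pd.Timestamp(day), f] = trv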
1.3: Using the results from Task 1.2, write code to produce a 1-by-3 subplot figure that shows the ‘volatility signature plot’ for RV, BV and TRV. Scale (i.e. multiply) the RVs, BVs and TRVs by 10^4 when making the plots. Your figure should look similar to the following.
(5 marks)
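A possible sketch of this figure is shown below, assuming a ‘volatility signature plot’ displays the measure averaged across days at each sampling frequency (check the lecture definition), and using the Task 1.2 outputs.

import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 3, figsize=(15, 4), sharey=True)
for ax, (name, m) in zip(axes, [('RV', RVdf), ('BV', BVdf), ('TRV', TRVdf)]):
    ax.plot((m * 1e4).mean(axis=0).values, marker='o')   # average across days, scaled by 10^4
    ax.set_xticks(range(len(m.columns)))
    ax.set_xticklabels(m.columns, rotation=90)
    ax.set_title(f'Volatility signature plot: {name}')
    ax.set_xlabel('Sampling frequency')
fig.tight_layout()
plt.show()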
1.4: Using a 5min sampling frequency and a 5% significance level, write code to conduct a jump test to test whether or not there are jumps in the prices of GOOG on each date in the sample. Your jump test should be based on the test statistic z_t, where J_t = max(RV_t − BV_t, 0) is an estimate of the jump variation on day t. If there are no jumps on day t, z_t ∼ N(0, 1) (see the lecture slides on high-frequency finance for more details). Store the output in a data frame called jumpdf that has row indices being the unique dates in the sample and columns including ‘RV’, ‘BV’, ‘J’ and ‘jump’, which respectively capture the RV, BV, jump variation, and whether there are jumps (i.e. ‘jump’ is ‘Yes’) or not (i.e. ‘jump’ is ‘No’) on each date. (5 marks)
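The sketch below illustrates the workflow only; it assumes the ratio-type statistic of Huang & Tauchen (2005) with tripower quarticity and a one-sided 5% critical value, so the exact form of z_t and of the critical region must be replaced by the statistic defined in the lecture slides. It reuses the cleaned data 'clean' from the Task 1.1 sketch.

import numpy as np
import pandas as pd
from scipy.special import gamma
from scipy.stats import norm

mu1 = np.sqrt(2 / np.pi)                            # E|Z| for standard normal Z
mu43 = 2 ** (2 / 3) * gamma(7 / 6) / gamma(1 / 2)   # E|Z|^(4/3)
theta = (np.pi / 2) ** 2 + np.pi - 5

rows = {}
for day, day_data in clean.groupby(clean.index.date):
    px = day_data['PRICE'].resample('5min').last().ffill()
    r = np.log(px).diff().dropna().values
    M = len(r)
    if M < 3:
        continue
    rv = np.sum(r ** 2)
    bv = mu1 ** (-2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
    # tripower quarticity (small-sample corrections omitted)
    tq = M * mu43 ** (-3) * np.sum(np.abs(r[2:]) ** (4 / 3) *
                                   np.abs(r[1:-1]) ** (4 / 3) *
                                   np.abs(r[:-2]) ** (4 / 3))
    j = max(rv - bv, 0)
    z = ((rv - bv) / rv) / np.sqrt(theta / M * max(1, tq / bv ** 2))
    rows[pd.Timestamp(day)] = {'RV': rv, 'BV': bv, 'J': j,
                               'jump': 'Yes' if z > norm.ppf(0.95) else 'No'}

jumpdf = pd.DataFrame.from_dict(rows, orient='index')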
Task 2: Return-Volatility Modelling (Σ = 20 marks)
Refer back to the csv data file ‘SP100-Feb2023.csv’ that lists the constituents of the S&P100 index as of February 2023, which was investigated in the Group Project. Import the data file into Python.
Using your student ID or library card number (e.g. 12345678) as a random seed, draw a random sample of 2 stocks (i.e. tickers) from the S&P100 index, excluding stocks ABBV, AVGO, CHTR, DOW, GM, KHC, META, PYPL and TSLA. Import daily Adjusted Close (Adj Close) prices for both stocks between 01/01/2010 and 31/12/2024 from Yahoo Finance. Compute the log daily returns (in %) for both stocks and drop days with NaN returns. Perform the following tasks.
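A minimal sketch of the sampling and download step is given below. It assumes the ticker column in ‘SP100-Feb2023.csv’ is called 'Symbol' (adjust to the actual column name), that numpy's default_rng is an acceptable way to seed the draw, and that the yfinance package is used for the price download.

import numpy as np
import pandas as pd
import yfinance as yf

sp100 = pd.read_csv('SP100-Feb2023.csv')
excluded = ['ABBV', 'AVGO', 'CHTR', 'DOW', 'GM', 'KHC', 'META', 'PYPL', 'TSLA']
pool = sp100.loc[~sp100['Symbol'].isin(excluded), 'Symbol']

rng = np.random.default_rng(12345678)          # replace 12345678 with your own student ID
tickers = list(rng.choice(pool, size=2, replace=False))

# yfinance's 'end' date is exclusive, so use 01/01/2025 to include 31/12/2024
prices = yf.download(tickers, start='2010-01-01', end='2025-01-01',
                     auto_adjust=False)['Adj Close']
rets = (100 * np.log(prices / prices.shift(1))).dropna()   # daily log returns in %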
2.1: Using data between 01/01/2010 and 31/12/2021 as in-sample data, write code to find the best-fitted AR(m)-GJR-GARCH(p, o, q) model with Student's t errors for the log returns of each stock that minimizes AIC, with m, q ∈ {0, 1, 2, 3}, p ∈ {1, 2, 3} and 1 ≤ o ≤ p. Print the best-fitted AR(m)-GJR-GARCH(p, o, q) output for each stock and a statement similar to the following for your stock sample.
Best-fitted AR(m)-GJR-GARCH(p,o,q) model for GILD: AR(3)-GJR-GARCH(1,1,2) - AIC = 11310.9499
Best-fitted AR(m)-GJR-GARCH(p,o,q) model for GOOG: AR(3)-GJR-GARCH(1,1,1) - AIC = 10495.7030
(7 marks)
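One possible way to run the AIC search is sketched below, using the arch package; 'rets' and 'tickers' are the objects from the set-up sketch above, and m = 0 is handled as a constant-mean model.

import itertools
from arch import arch_model

def best_gjr(y):
    best = None
    for m, p, q in itertools.product(range(4), range(1, 4), range(4)):
        for o in range(1, p + 1):
            am = arch_model(y, mean='AR' if m > 0 else 'Constant',
                            lags=m if m > 0 else 0,
                            vol='GARCH', p=p, o=o, q=q, dist='t')
            res = am.fit(disp='off')
            if best is None or res.aic < best[0]:
                best = (res.aic, (m, p, o, q), res)
    return best

fitted, orders = {}, {}
for tic in tickers:
    aic, (m, p, o, q), res = best_gjr(rets.loc['2010':'2021', tic])
    fitted[tic], orders[tic] = res, (m, p, o, q)
    print(res.summary())
    print(f'Best-fitted AR(m)-GJR-GARCH(p,o,q) model for {tic}: '
          f'AR({m})-GJR-GARCH({p},{o},{q}) - AIC = {aic:.4f}')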
2.2: Use the best-fitted AR(m)-GJR-GARCH(p, o, q) model in Task 2.1 to test for the presence of ‘leverage effects’ (i.e. asymmetric responses of the conditional variance to the positive and negative shocks) in the return series of each stock.
Draw and print your test conclusion using a 5% significance level. (5 marks)
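A sketch of one way to carry out this test is given below. It assumes the test is based on the significance of the asymmetry (gamma) coefficients of the fitted GJR-GARCH model, using the p-values reported by arch, and that 'fitted' is the dictionary built in the Task 2.1 sketch; a Wald or likelihood-ratio style test may be preferred depending on the lecture treatment.

for tic, res in fitted.items():
    gammas = [name for name in res.params.index if name.startswith('gamma')]
    pvals = res.pvalues[gammas]
    if (pvals < 0.05).any():
        print(f'{tic}: at least one gamma coefficient is significant at the 5% level '
              f'-> evidence of leverage effects.')
    else:
        print(f'{tic}: no gamma coefficient is significant at the 5% level '
              f'-> no evidence of leverage effects.')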
2.3: Write code to plot a 2-by-5 subplot figure that includes the following diagnostics for the best-fitted AR(m)-GJR-GARCH(p, o, q) model found in Task 2.1:
Row 1: (i) Time series plot of the standardized residuals, (ii) histogram of the standardized residuals, fitted with a kernel density estimate and the density of a fitted Student's t distribution, (iii) ACF of the standardized residuals, (iv) ACF of the squared standardized residuals, and (v) time series of the fitted conditional volatility.
Row 2: The same subplots for the second stock.
Your figure should look similar to the following for your sample of stocks. Comment on what you observe from the plots. (8 marks)
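A minimal sketch of the diagnostics figure is given below, again assuming the 'fitted' dictionary of arch results built in the Task 2.1 sketch.

import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

fig, axes = plt.subplots(2, 5, figsize=(20, 7))
for i, (tic, res) in enumerate(fitted.items()):
    z = res.std_resid.dropna()
    axes[i, 0].plot(z)
    axes[i, 0].set_title(f'{tic}: standardized residuals')
    axes[i, 1].hist(z, bins=50, density=True, alpha=0.5)
    x = np.linspace(z.min(), z.max(), 200)
    axes[i, 1].plot(x, st.gaussian_kde(z)(x), label='KDE')
    nu = res.params['nu']
    # standardized (unit-variance) Student's t density implied by the fit
    axes[i, 1].plot(x, st.t.pdf(x, nu, scale=np.sqrt((nu - 2) / nu)), label='fitted t')
    axes[i, 1].legend()
    axes[i, 1].set_title('Histogram')
    plot_acf(z, ax=axes[i, 2], title='ACF of std. residuals')
    plot_acf(z ** 2, ax=axes[i, 3], title='ACF of squared std. residuals')
    axes[i, 4].plot(res.conditional_volatility)
    axes[i, 4].set_title('Conditional volatility')
fig.tight_layout()
plt.show()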
Task 3: Return-Volatility Forecasting (Σ = 25 marks)
3.1: Using the remaining data from 01/01/2022 to 31/12/2024 as out-of-sample data, write code to produce one-step analytic forecasts, together with 95% confidence intervals, for the returns and conditional volatility of each stock using the respective best-fitted AR(m)-GJR-GARCH(p, o, q) model found in Task 2.1. Also produce the one-step return forecasts, 95% CI and conditional volatility forecasts for a competing AR(1)-GARCH(1,1) model with Student's t errors. For each stock and each model, the forecast output is a data frame with 4 columns f, fl, fu and volf, corresponding to the one-step return forecasts, the 95% CI lower and upper bounds for the return forecasts, and the one-step conditional volatility forecasts. (7 marks)
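The sketch below outlines one way to obtain these forecasts with the arch package, assuming a fixed-parameter scheme: the model is estimated on data up to 31/12/2021 (arch's last_obs is exclusive) and forecast() is then applied over the remaining sample. The 95% CI here uses the quantiles of the fitted standardized Student's t distribution; note that arch stores in row t the forecast made at t for t+1, so the output should be shifted forward by one trading day when compared with realized values.

import numpy as np
import pandas as pd
import scipy.stats as st
from arch import arch_model

def one_step_forecasts(y, m, p, o, q, split='2022-01-01'):
    am = arch_model(y, mean='AR' if m > 0 else 'Constant',
                    lags=m if m > 0 else 0,
                    vol='GARCH', p=p, o=o, q=q, dist='t')
    res = am.fit(last_obs=split, disp='off')          # estimation sample ends 31/12/2021
    origin = y.loc[:split].index[-1]                  # last in-sample date = first forecast origin
    fc = res.forecast(horizon=1, start=origin, reindex=False)
    mean, vol = fc.mean['h.1'], np.sqrt(fc.variance['h.1'])
    nu = res.params['nu']
    qt = st.t.ppf(0.975, nu) * np.sqrt((nu - 2) / nu)  # 97.5% quantile of a unit-variance t
    return pd.DataFrame({'f': mean, 'fl': mean - qt * vol,
                         'fu': mean + qt * vol, 'volf': vol})

# e.g. for each sampled ticker (orders from the Task 2.1 sketch):
# gjr_fc = one_step_forecasts(rets[tic], *orders[tic])
# g11_fc = one_step_forecasts(rets[tic], 1, 1, 0, 1)   # competing AR(1)-GARCH(1,1)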
3.2: Using the results from Task 3.1, write code to plot a 3-by-2 subplot figure showing:
Row 1: for the first stock in your sample, (i) the one-step return forecasts against the true values during the out-of-sample period, plus the 95% confidence interval of the return forecasts, for the best-fitted AR(m)-GJR-GARCH(p, o, q) model found in Task 2.1, (ii) the same for the competing AR(1)-GARCH(1,1) model, and (iii) the one-step conditional volatility forecasts produced by the two competing models.
Row 2: The same subplots for the second stock. (5 marks)
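As a sketch, one panel of such a figure could be produced as below, assuming gjr_fc and g11_fc are Task 3.1 forecast frames for one stock (already shifted to align with their target dates) and 'actual' holds the realized out-of-sample returns; these names are illustrative only.

import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(actual, color='grey', alpha=0.6, label='realized return')
ax.plot(gjr_fc['f'], label='GJR-GARCH forecast')
ax.fill_between(gjr_fc.index, gjr_fc['fl'], gjr_fc['fu'], alpha=0.2, label='95% CI (GJR)')
ax.plot(g11_fc['f'], label='AR(1)-GARCH(1,1) forecast')
ax.legend()
plt.show()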
3.3: Denote by e_{t+h|t} = y_{t+h} − ŷ_{t+h|t} the h-step forecast error at time t, which is the difference between the observed value y_{t+h} and an h-step forecast ŷ_{t+h|t} produced by a forecast model. Four popular metrics to quantify the accuracy of the forecasts in an out-of-sample period with T′ observations are the Mean Absolute Error (MAE), the Mean Squared Error (MSE), the Mean Absolute Percentage Error (MAPE) and the Mean Absolute Scaled Error (MASE). The closer these measures are to zero, the more accurate the forecasts. Now, write code to compute the four above forecast accuracy measures for the one-step return forecasts produced by the best-fitted AR(m)-GJR-GARCH(p, o, q) model found in Task 2.1 and the competing AR(1)-GARCH(1,1) model for each stock in your sample. For each stock, produce a data frame containing the forecast accuracy measures of a similar format to the following, with columns being the names of the above four accuracy measures and index being the names of the competing models under consideration:
                         MAE    MSE    MAPE    MASE
AR(3)-GJR-GARCH(1,1,2)
AR(1)-GARCH(1,1)
Print a statement similar to the following for your stock sample:
For GILD:
Measures that AR(3)-GJR-GARCH(1,1,2) model produces smaller than AR(1)-GARCH(1,1) model:
Measures that AR(1)-GARCH(1,1) model produces smaller than
AR(3)-GJR-GARCH(1,1,2) model: MAE, MSE, MAPE, MASE
(7 marks)
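A minimal sketch of the four measures is given below, assuming their standard definitions; in particular, MASE is assumed here to scale the MAE by the in-sample mean absolute error of a naive random-walk forecast, which should be checked against the lecture definition. 'actual' holds the realized out-of-sample returns and 'insample' the in-sample returns for one stock.

import numpy as np
import pandas as pd

def accuracy(forecast, actual, insample):
    e = actual - forecast
    mae = np.mean(np.abs(e))
    mse = np.mean(e ** 2)
    mape = 100 * np.mean(np.abs(e / actual))
    scale = np.mean(np.abs(np.diff(insample)))   # in-sample MAE of a naive one-step forecast
    return pd.Series({'MAE': mae, 'MSE': mse, 'MAPE': mape, 'MASE': mae / scale})

# e.g. for one stock:
# acc = pd.DataFrame({'AR(3)-GJR-GARCH(1,1,2)': accuracy(gjr_fc['f'], actual, insample),
#                     'AR(1)-GARCH(1,1)':       accuracy(g11_fc['f'], actual, insample)}).T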
3.4: Using a 5% significance level, conduct the Diebold-Mariano test for each stock in your sample to test if the one-step return forecasts produced by the best-fitted
AR(m)-GJR-GARCH(p, o, q) model found in Task 2.1 and the competing AR(1)-GARCH(1,1) model are equally accurate based on the four accuracy measures in Task 3.3. For each stock, produce a data frame containing the forecast accuracy measures of a similar format to the following:
                         MAE    MSE    MAPE    MASE
AR(3)-GJR-GARCH(1,1,2)
AR(1)-GARCH(1,1)
DMm
pvalue
where ‘DMm’ is the Harvey, Leybourne & Newbold (1997) modified Diebold-Mariano test statistic (defined in the lecture), and ‘pvalue’ is the p-value associated with the DMm statistic. Draw and print conclusions on whether the best-fitted AR(m)-GJR-GARCH(p, o, q) model produces equally accurate, significantly less accurate or significantly more accurate one-step return forecasts than the competing AR(1)-GARCH(1,1) model based on each accuracy measure for your stock sample.
Your printed conclusions should look similar to the following:
For GILD:
Model AR(3)-GJR-GARCH(1,1,2) produces significantly less accurate one-step return forecasts than model AR(1)-GARCH(1,1) based on MAE.
Model AR(3)-GJR-GARCH(1,1,2) produces equally accurate one-step return forecasts as model AR(1)-GARCH(1,1) based on MSE.
Model AR(3)-GJR-GARCH(1,1,2) produces significantly less accurate one-step return forecasts than model AR(1)-GARCH(1,1) based on MAPE.
Model AR(3)-GJR-GARCH(1,1,2) produces significantly less accurate one-step return forecasts than model AR(1)-GARCH(1,1) based on MASE.
(6 marks)
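A sketch of the Harvey-Leybourne-Newbold modified DM test for one-step forecasts (h = 1) is shown below. The loss differentials for MAE and MSE are implemented; MAPE and MASE should be added analogously, and the exact construction should follow the lecture definition. e1 and e2 are the out-of-sample forecast errors of the two competing models.

import numpy as np
from scipy import stats

def dm_test(e1, e2, loss='MSE', h=1):
    if loss == 'MAE':
        d = np.abs(e1) - np.abs(e2)
    elif loss == 'MSE':
        d = e1 ** 2 - e2 ** 2
    else:
        raise ValueError('add the MAPE / MASE loss differentials analogously')
    T = len(d)
    dbar = d.mean()
    # variance of the mean loss differential (no autocovariance terms needed when h = 1)
    var_dbar = d.var(ddof=0) / T
    dm = dbar / np.sqrt(var_dbar)
    # HLN small-sample correction and Student's t reference distribution
    adj = np.sqrt((T + 1 - 2 * h + h * (h - 1) / T) / T)
    dmm = adj * dm
    pval = 2 * stats.t.sf(np.abs(dmm), df=T - 1)
    return dmm, pval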
Task 4: (Σ = 20 marks)
These marks will go to programs that are well structured, intuitive to use (i.e. they provide sufficient comments for me to follow and are straightforward for me to run), generalisable (i.e. they can be applied to different sets of stocks (2 or more)) and elegant (i.e. the code is neat and shows some degree of efficiency).