TTP PNR filter
PNR filter uses the "percentile nearest rank" method to produce signals from any source, including oscillator indicators and price bars.
Features:
* Length - how many candles back in time to use for calculating PNR
* % low and % high - define what range of the spread of captured values forms the PNR band. Use 99 and 100 to create a band around the highest 1% of values, or 0 and 1 for a band in the lowest 1%. It accepts floating-point numbers, so you can isolate very rare occurrences.
* src - by default it will use the close price but PNR filter can be used with any source. It's particularly useful when working with oscillators like RSI, MACD, ADX, etc.
* Signal direction - The indicator will print 1 when the selected conditions are met. Once the PNR band is plotted, you can choose from cross over, cross under, above and below conditions to trigger a signal.
* Signal source - the band consists of a % low and a % high line; this option allows you to pick which one is used with the "Signal direction" parameter.
Example configuration:
1) Select 200 as the length
2) Select % low 0 and % high 1
3) Add RSI to the chart and select it as the source parameter
4) Select signal direction cross over
5) Select signal source % high which corresponds to the 1% band
In this setup you are finding RSI values that, over the past 200 candles, have been this low only 1% of the time. With each new candle the calculation window moves forward, dropping the oldest candle.
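For orientation, here is a minimal Pine Script v5 sketch of this example configuration. It is not the published script's source; the input names and the built-in RSI standing in for the external source are assumptions.
//@version=5
indicator("PNR filter sketch")
length = input.int(200, "Length")
pHigh  = input.float(1.0, "% high")   // 1 => band at the lowest 1% of the captured values
osc    = ta.rsi(close, 14)            // stand-in for the "src" input
bandHigh = ta.percentile_nearest_rank(osc, length, pHigh)
signal   = ta.crossover(osc, bandHigh) ? 1 : 0
plot(osc, "RSI", color.gray)
plot(bandHigh, "% high band", color.orange)
plot(signal, "Signal", color.teal, style=plot.style_columns)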
Statistics
lib_profile
Library "lib_profile"
A library with functions to calculate a volume profile for either a set of candles within the current chart, or a single candle from its lower timeframe security data. All you need is to feed it arrays of top, bottom and volume (or 1) values, for example collected with the history() helper below.
method delete(this)
deletes this bucket's plot from the chart
Namespace types: Bucket
Parameters:
this (Bucket)
method delete(this)
Namespace types: Profile
Parameters:
this (Profile)
method delete(this)
Namespace types: Bucket
Parameters:
this (Bucket[])
method delete(this)
Namespace types: Profile
Parameters:
this (Profile[])
method update(this, top, bottom, value, fraction)
updates this bucket's data
Namespace types: Bucket
Parameters:
this (Bucket)
top (float)
bottom (float)
value (float)
fraction (float)
method update(this, tops, bottoms, values)
update this Profile's data (recalculates the whole profile and applies the result to this object). TODO: optimise this to calculate incrementally, to improve realtime performance on high resolutions
Namespace types: Profile
Parameters:
this (Profile)
tops (float[]) : array of range top/high values (either from ltf or chart candles, using the history() function)
bottoms (float[]) : array of range bottom/low values (either from ltf or chart candles, using the history() function)
values (float[]) : array of range volume/1 values (either from ltf or chart candles, using the history() function; 1s can be used for analysing candles per bucket/price range over time)
method tostring(this)
allows debug print of a bucket
Namespace types: Bucket
Parameters:
this (Bucket)
method draw(this, start_t, start_i, end_t, end_i, args, line_color)
allows drawing a line in a Profile, representing this bucket and its value, plus its value's fraction of the Profile's total value
Namespace types: Bucket
Parameters:
this (Bucket)
start_t (int) : the time x coordinate of the line's left end (depends on the Profile box)
start_i (int) : the bar_index x coordinate of the line's left end (depends on the Profile box)
end_t (int) : the time x coordinate of the line's right end (depends on the Profile box)
end_i (int) : the bar_index x coordinate of the line's right end (depends on the Profile box)
args (LineArgs type from robbatt/lib_plot_objects/24) : the default arguments for the line style
line_color (color) : the color override for POC/VAH/VAL lines
method draw(this, forced_width)
draw all components of this Profile (Box, Background, Bucket lines, POC/VAH/VAL overlay levels and labels)
Namespace types: Profile
Parameters:
this (Profile)
forced_width (int) : allows forcing the width of the Profile Box; overrides the ProfileArgs.default_size and ProfileArgs.extend arguments (default: na)
method init(this)
Namespace types: ProfileArgs
Parameters:
this (ProfileArgs)
method init(this)
Namespace types: Profile
Parameters:
this (Profile)
profile(tops, bottoms, values, resolution, vah_pc, val_pc, bucket_buffer)
split a chart/parent bar into 'resolution' sections and figure out in which section the most volume/time was spent, by analysing a given set of (intra)bars' top/bottom/volume values. It then returns the price center of the bin with the highest volume, essentially marking the point of control / highest-volume level (POC) of the chart/parent bar.
Parameters:
tops (float[]) : array of range top/high values (either from ltf or chart candles, using the history() function)
bottoms (float[]) : array of range bottom/low values (either from ltf or chart candles, using the history() function)
values (float[]) : array of range volume/1 values (either from ltf or chart candles, using the history() function; 1s can be used for analysing candles per bucket/price range over time)
resolution (int) : amount of buckets/price ranges to sort the candle data into (analyse how much volume / time was spent in a certain bucket/price range) (default: 25)
vah_pc (float) : a threshold percentage (of values' total) for the top end of the value area (default: 80)
val_pc (float) : a threshold percentage (of values' total) for the bottom end of the value area (default: 20)
bucket_buffer (Bucket[]) : optional buffer of empty Buckets to fill; if omitted, a new one is created and returned. The buffer length must match the resolution
Returns: poc (price level), vah (price level), val (price level), poc_index (idx in buckets), vah_index (idx in buckets), val_index (idx in buckets), buckets (filled buffer or new)
create_profile(start_idx, tops, bottoms, values, resolution, vah_pc, val_pc, args)
split a chart/parent bar into 'resolution' sections and figure out in which section the most volume/time was spent, by analysing a given set of (intra)bars' top/bottom/volume values. It then returns the price center of the bin with the highest volume, essentially marking the point of control / highest-volume level (POC) of the chart/parent bar.
Parameters:
start_idx (int) : the bar_index at which the Profile should start drawing
tops (float[]) : array of range top/high values (either from ltf or chart candles, using the history() function)
bottoms (float[]) : array of range bottom/low values (either from ltf or chart candles, using the history() function)
values (float[]) : array of range volume/1 values (either from ltf or chart candles, using the history() function; 1s can be used for analysing candles per bucket/price range over time)
resolution (int) : amount of buckets/price ranges to sort the candle data into (analyse how much volume / time was spent in a certain bucket/price range) (default: 25)
vah_pc (float) : a threshold percentage (of values' total) for the top end of the value area (default: 80)
val_pc (float) : a threshold percentage (of values' total) for the bottom end of the value area (default: 20)
args (ProfileArgs)
Returns: poc (price level), vah (price level), val (price level), poc_index (idx in buckets), vah_index (idx in buckets), val_index (idx in buckets), buckets (filled buffer or new)
history(src, len, offset)
allows fetching an array of values from the history series with offset from current candle
Parameters:
src (int)
len (int)
offset (int)
history(src, len, offset)
allows fetching an array of values from the history series with offset from current candle
Parameters:
src (float)
len (int)
offset (int)
history(src, len, offset)
allows fetching an array of values from the history series with offset from current candle
Parameters:
src (bool)
len (int)
offset (int)
history(src, len, offset)
allows fetching an array of values from the history series with offset from current candle
Parameters:
src (string)
len (int)
offset (int)
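For orientation, a minimal usage sketch combining history() and profile() might look like the following. The import path/version, the alias, and the omission of the optional bucket_buffer are assumptions, not part of the library documentation above.
//@version=5
indicator("lib_profile usage sketch", overlay=true)
import robbatt/lib_profile/1 as prof   // assumed import path/version
len = input.int(50, "Candles in profile")
// collect the last `len` chart candles into arrays via the history() helper
tops    = prof.history(high,   len, 0)
bottoms = prof.history(low,    len, 0)
values  = prof.history(volume, len, 0)
// sort the candles into 25 price buckets with an 80/20 value area and read back the levels
[poc, vah, val, poc_i, vah_i, val_i, buckets] = prof.profile(tops, bottoms, values, 25, 80, 20)
if barstate.islast
    label.new(bar_index, poc, "POC " + str.tostring(poc))
    label.new(bar_index, vah, "VAH " + str.tostring(vah))
    label.new(bar_index, val, "VAL " + str.tostring(val))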
Bucket
Fields:
idx (series int) : the index of this Bucket within the Profile starting with 0 for the lowest Bucket at the bottom of the Profile
value (series float) : the value of this Bucket; can be volume or time. For using time, pass an array of 1s to the update function
top (series float) : the top of this Bucket's price range (for calculation)
btm (series float) : the bottom of this Bucket's price range (for calculation)
center (series float) : the center of this Bucket's price range (for plotting)
fraction (series float) : the fraction this Bucket's value is compared to the total of the Profile
plot_bucket_line (Line type from robbatt/lib_plot_objects/24) : the line that represents this bucket and its value in the Profile
ProfileArgs
Fields:
show_poc (series bool) : whether to plot a POC line across the Profile Box (default: true)
show_profile (series bool) : whether to plot a line for each Bucket in the Profile Box, indicating the value per Bucket (price range), e.g. the volume that occurred in a certain time and price range (default: false)
show_va (series bool) : whether to plot a VAH/VAL line across the Profile Box (default: false)
show_va_fill (series bool) : whether to fill the 'value' area between VAH/VAL line (default: false)
show_background (series bool) : whether to fill the Profile Box with a background color (default: false)
show_labels (series bool) : whether to add labels to the right end of the POC/VAH/VAL line (default: false)
show_price_levels (series bool) : whether to add price values to the labels at the right end of the POC/VAH/VAL line (default: false)
extend (series bool) : whether to extend the Profile Box to the current candle (default: false)
default_size (series int) : the default min. width of the Profile Box (default: 30)
args_poc_line (LineArgs type from robbatt/lib_plot_objects/24) : arguments for the poc line plot
args_va_line (LineArgs type from robbatt/lib_plot_objects/24) : arguments for the va line plot
args_poc_label (LabelArgs type from robbatt/lib_plot_objects/24) : arguments for the poc label plot
args_va_label (LabelArgs type from robbatt/lib_plot_objects/24) : arguments for the va label plot
args_profile_line (LineArgs type from robbatt/lib_plot_objects/24) : arguments for the Bucket line plots
args_profile_bg (BoxArgs type from robbatt/lib_plot_objects/24)
va_fill_color (series color) : color for the va area fill plot
Profile
Fields:
start (series int) : left x coordinate for the Profile Box
end (series int) : right x coordinate for the Profile Box
resolution (series int) : the amount of buckets/price ranges the Profile will dissect the data into
vah_threshold_pc (series float) : the percentage of the total data value to mark the upper threshold for the main value area
val_threshold_pc (series float) : the percentage of the total data value to mark the lower threshold for the main value area
args (ProfileArgs) : the style arguments for the Profile Box
h (series float) : the highest price of the data
l (series float) : the lowest price of the data
total (series float) : the total data value (e.g. volume of all candles, or just one each to analyse candle distribution over time)
buckets (Bucket[]) : the Bucket objects holding the data for each price range bucket
poc_bucket_index (series int) : the Bucket index in buckets, that holds the poc Bucket
vah_bucket_index (series int) : the Bucket index in buckets, that holds the vah Bucket
val_bucket_index (series int) : the Bucket index in buckets, that holds the val Bucket
poc (series float) : the corresponding price level marking the Point Of Control
vah (series float) : the corresponding price level marking the Value Area High
val (series float) : the corresponding price level marking the Value Area Low
plot_poc (Line type from robbatt/lib_plot_objects/24)
plot_vah (Line type from robbatt/lib_plot_objects/24)
plot_val (Line type from robbatt/lib_plot_objects/24)
plot_poc_label (Label type from robbatt/lib_plot_objects/24)
plot_vah_label (Label type from robbatt/lib_plot_objects/24)
plot_val_label (Label type from robbatt/lib_plot_objects/24)
plot_va_fill (LineFill type from robbatt/lib_plot_objects/24)
plot_profile_bg (Box type from robbatt/lib_plot_objects/24)
Percentile Based Trend Strength
The "Percentile Based Trend Strength" (PBTS) indicator calculates trend strength based on percentile values of high and low prices for various length periods and then identifies the current trend as either Bullish, Bearish, or N/A (No Trend). Here's a step-by-step explanation of the code:
Percentile Calculations:
For each specified length period (13, 21, 34, 55, 89, and 144 - Fibonacci numbers), the code calculates the 75th percentile of high prices (e.g., percentile_13H) and the 25th percentile of low prices (e.g., percentile_13L). These percentiles represent levels that prices need to exceed or fall below to indicate a strong trend.
Calculate Highest High and Lowest Low:
The highest high (the 75th percentile high price of the longest length) and the lowest low (the 25th percentile low price of the longest length) for the longest length period (144) are calculated as highest_high and lowest_low. These values represent threshold price levels.
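As a rough sketch of these first two steps (the exact percentile method used by the published script is an assumption; ta.percentile_nearest_rank is shown here for illustration):
//@version=5
indicator("PBTS percentile sketch", overlay=true)
percentile_13H = ta.percentile_nearest_rank(high, 13, 75)   // repeated for 21, 34, 55, 89, 144
percentile_13L = ta.percentile_nearest_rank(low,  13, 25)
highest_high   = ta.percentile_nearest_rank(high, 144, 75)  // threshold for bullish conditions
lowest_low     = ta.percentile_nearest_rank(low,  144, 25)  // threshold for bearish conditions
plot(percentile_13H, "75th pct high (13)", color.new(color.teal, 60))
plot(percentile_13L, "25th pct low (13)", color.new(color.red, 60))
plot(highest_high, "75th pct high (144)", color.teal)
plot(lowest_low, "25th pct low (144)", color.red)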
Trend Strength Conditions:
The code calculates various conditions to determine trend strength. For each percentile value and each length period, it checks if the percentile value is greater than the highest high (trendBull) or less than the lowest low (trendBear). These conditions are used to assess the strength of the bullish and bearish trends.
Count Bull and Count Bear:
The countBull and countBear variables count the number of bullish and bearish conditions met, respectively. These counts help evaluate trend strength.
Weak Bull and Weak Bear Count:
The code calculates the number of weak bullish and bearish conditions. Weak conditions occur when a percentile value falls within the range defined by the highest high and lowest low but doesn't meet the strong trend criteria.
Bull Strength and Bear Strength:
bullStrength and bearStrength are calculated based on the counts of bullish, bearish, weak bullish, and weak bearish conditions. These values represent the overall strength of the bullish and bearish trends.
Strong Bull and Bear Conditions:
These conditions occur when the 75th percentile of high prices (for bull conditions) or the 25th percentile of low prices (for bear conditions) exceeds or falls below the highest high or lowest low, respectively, for the specified length period.
Strong bull conditions indicate a strong upward trend, while strong bear conditions indicate a strong downward trend.
Strong conditions are indicative of more significant price movements and are considered as primary signals of trend strength.
Weak Bull and Bear Conditions:
Weak bull and bear conditions are more nuanced. They occur when the 75th percentile of high prices (for weak bull conditions) or the 25th percentile of low prices (for weak bear conditions) falls within the range defined by the highest high and lowest low for the specified length period.
In other words, prices are not strong enough to reach the extreme levels represented by the highest high or lowest low, but they still exhibit some bullish or bearish tendencies within that range.
Weak conditions suggest a less robust trend. They may indicate that while there is some bias toward a bullish or bearish trend, it is not as strong or decisive as in the case of strong conditions.
Current Trend Identification:
The current trend is determined by comparing bullStrength and bearStrength. If bullStrength is greater, it's considered a Bull trend; if bearStrength is greater, it's a Bear trend. If they are equal, the trend is identified as N/A (No Trend).
Displaying Trend Information:
The code creates a table to display the current trend, reversal probability (strength), count of bullish and bearish conditions, weak bullish and weak bearish counts, and colors the text accordingly.
Plotting Percentiles:
Finally, the code plots the percentile lines for visualization, with 20% transparency. It also plots the highest high and lowest low lines (75th and 25th percentile of the longest length 144) using their original colors.
In summary, this indicator calculates trend strength based on percentile levels of high and low prices for different length periods. It then counts the number of bullish and bearish conditions, factors in weak conditions, and compares the strengths to identify the current trend as Bullish, Bearish, or No Trend. It provides a table with trend information and visualizes percentile lines on the chart.
Strategy Gaussian Anomaly Derivative
Concept behind this Strategy:
Considering a normal "buy/sell" situation, an asset would on average be bought at the median price, following a Gaussian-like concept. A higher or lower average would signify that the currently perceived value is respectively higher or lower than the current median price, which means that buyers are evaluating the price as underpriced or overpriced.
This behaviour becomes even more relevant when you look at how it evolves, i.e. at its derivative.
Therefore, this strategy setup is based on anomalies in this Gaussian-like concept, namely the positioning of the average close compared to the high-low average and its derivative, i.e. the derivative of the plotted basic signal 1-(high+low)/(2*close).
This strategy can also be used as a trend-change and continuation-strength indicator.
In the Setup Signal part :
You can define the filtering of the basis signal "1-(high+low)/(2*close)" with an EMA or SMA, as you wish.
You can define the corresponding period and the threshold as a multiple of the average 1/3 of the all-time value of the basis signal.
You can define the SMA filtering period of the derivative signal and the corresponding threshold, using the same kind of multiple of the average 1/3 of the all-time value of the derivative.
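A bare-bones sketch of the basis signal and its derivative follows. The threshold scaling described above is not reproduced, and the simple zero-cross entry is only an illustration, not the published strategy's logic.
//@version=5
strategy("Gaussian anomaly derivative sketch")
basisLen = input.int(14, "Basis signal period")
derivLen = input.int(2, "Derivative period")
basis  = 1 - (high + low) / (2 * close)       // the plotted basic signal
fBasis = ta.sma(basis, basisLen)              // SMA filtering (EMA could be used instead)
deriv  = ta.sma(ta.change(fBasis), derivLen)  // filtered derivative of the basis signal
plot(fBasis, "Filtered basis", color.teal)
plot(deriv, "Derivative", color.orange)
if ta.crossover(deriv, 0)
    strategy.entry("Long", strategy.long)
if ta.crossunder(deriv, 0)
    strategy.close("Long")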
In the Setup Strategy part :
You can set up your strategy assessment based on Long and/or Short. You can also define the considered period.
The most successful strategies I tuned were based on the derivative indicator, with periods under 30 for both the basis signal and the derivative: typically 1 to 3 for the derivative and 7 to 21 for the basis signal. The threshold also depends on the asset's volatility; 1 is usually the most efficient, but 0 to 10 can be relevant depending on the situation. You can find an example of tuning this strategy on Kering's case hereafter.
I hope you will enjoy using this strategy; don't hesitate to comment, question, correct or complete it! I would be very curious about similar well-known approaches that may already exist.
Thank you!
Paytience Distribution
Paytience Distribution Indicator User Guide
Overview:
The Paytience Distribution indicator is designed to visualize the distribution of any chosen data source. By default, it visualizes the distribution of a built-in Relative Strength Index (RSI). This guide provides details on its functionality and settings.
Distribution Explanation:
A distribution in statistics and data analysis represents the way values in a data set are spread out over a range. The distribution can show where values are concentrated, where they are absent or infrequent, and any other patterns. Visualizing distributions helps users understand underlying patterns and tendencies in the data.
Settings and Parameters:
Main Settings:
Window Size
- Description: This dictates the amount of data used to calculate the distribution.
- Options: A whole number (integer).
- Tooltip: A window size of 0 means it uses all the available data.
Scale
- Description: Adjusts the height of the distribution visualization.
- Options: Any integer between 20 and 499.
Round Source
- Description: Rounds the chosen data source to a specified number of decimal places.
- Options: Any whole number (integer).
Minimum Value
- Description: Specifies the minimum value you wish to account for in the distribution.
- Options: Any integer from 0 to 100.
- Tooltip: 0 being the lowest and 100 being the highest.
Smoothing
- Description: Applies a smoothing function to the distribution visualization to simplify its appearance.
- Options: Any integer between 1 and 20.
Include 0
- Description: Dictates whether zero should be included in the distribution visualization.
- Options: True (include) or False (exclude).
Standard Deviation
- Description: Enables the visualization of standard deviation, which measures the amount of variation or dispersion in the chosen data set.
- Tooltip: This is best suited for a source that has a vaguely Gaussian (bell-curved) distribution.
- Options: True (enable) or False (disable).
Color Options
- High Color and Low Color: Specifies colors for high and low data points.
- Standard Deviation Color: Designates a color for the standard deviation lines.
Example Settings:
Example Usage RSI
- Description: Enables the use of RSI as the data source.
- Options: True (enable) or False (disable).
RSI Length
- Description: Determines the period over which the RSI is calculated.
- Options: Any integer greater than 1.
Using an External Source:
To visualize the distribution of an external source:
Select the "Move to" option in the dropdown menu for the Paytience Distribution indicator on your chart.
Set it to the existing panel where your external data source is placed.
Navigate to "Pin to Scale" and pin the indicator to the same scale as your external source.
Indicator Logic and Functions:
Sinc Function: Used in signal processing, the sinc function ensures the elimination of aliasing effects.
Sinc Filter: A filtering mechanism that uses the sinc function to provide estimates of the data.
Weighted Mean & Standard Deviation: These are statistical measures used to capture the central tendency and variability in the data, respectively.
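As a rough, generic illustration of the weighted mean and standard deviation idea (not the indicator's actual sinc-based implementation), a volume-weighted mean and deviation of closes over a rolling window could be computed like this:
//@version=5
indicator("Weighted mean / stdev sketch", overlay=true)
len = input.int(100, "Window")
wSum  = math.sum(volume, len)
wMean = math.sum(close * volume, len) / wSum                          // volume-weighted mean
wVar  = math.sum(close * close * volume, len) / wSum - wMean * wMean  // E[x^2] - E[x]^2 under weights
wStd  = math.sqrt(math.max(wVar, 0.0))
plot(wMean, "Weighted mean", color.teal)
plot(wMean + wStd, "+1 SD", color.orange)
plot(wMean - wStd, "-1 SD", color.orange)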
Output and Visualization:
The indicator visualizes the distribution as a series of colored boxes, with the intensity of the color indicating the frequency of the data points in that range. Additionally, lines representing the standard deviation from the mean can be displayed if the "Standard Deviation" setting is enabled.
The example RSI, if enabled, is plotted along with its common threshold lines at 70 (upper) and 30 (lower).
Understanding the Paytience Distribution Indicator
1. What is a Distribution?
A distribution represents the spread of data points across different values, showing how frequently each value occurs. For instance, if you're looking at a stock's closing prices over a month, you may find that the stock closed most frequently around $100, occasionally around $105, and rarely around $110. Graphically visualizing this distribution can help you see the central tendencies, variability, and shape of your data distribution. This visualization can be essential in determining key trading points, understanding volatility, and getting an overview of the market sentiment.
2. The Rounding Mechanism
Every asset and dataset is unique. Some assets, especially cryptocurrencies or forex pairs, might have values that go up to many decimal places. Rounding these values is essential to generate a more readable and manageable distribution.
Why is Rounding Needed? If every unique value from a high-precision dataset was treated distinctly, the resulting distribution would be sparse and less informative. By rounding off, the values are grouped, making the distribution more consolidated and understandable.
Adjusting Rounding: The `Round Source` input allows users to determine the number of decimal places they'd like to consider. If you're working with an asset with many decimal places, adjust this setting to get a meaningful distribution. If the rounding is set too low for high precision assets, the distribution could lose its utility.
3. Standard Deviation and Oscillators
Standard deviation is a measure of the amount of variation or dispersion of a set of values. In the context of this indicator:
Use with Oscillators: When using oscillators like RSI, the standard deviation can provide insights into the oscillator's range. This means you can determine how much the oscillator typically deviates from its average value.
Setting Bounds: By understanding this deviation, traders can better set reasonable upper and lower bounds, identifying overbought or oversold conditions in relation to the oscillator's historical behavior.
4. Resampling
Resampling is the process of adjusting the time frame or value buckets of your data. In the context of this indicator, resampling ensures that the distribution is manageable and visually informative.
Resample Size vs. Window Size: The `Resample Resolution` dictates the number of bins or buckets the distribution will be divided into. On the other hand, the `Window Size` determines how much of the recent data will be considered. It's crucial to ensure that the resample size is smaller than the window size, or else the distribution will not accurately reflect the data's behavior.
Why Use Resampling? Especially for price-based sources, setting the window size around 500 (instead of 0) ensures that the distribution doesn't become too overloaded with data. When set to 0, the window size uses all available data, which may not always provide an actionable insight.
5. Uneven Sample Bins and Gaps
You might notice that the width of sample bins in the distribution is not uniform, and there can be gaps.
Reason for Uneven Widths: This happens because the indicator uses a 'resampled' distribution. The width represents the range of values in each bin, which might not be constant across bins. Some value ranges might have more data points, while others might have fewer.
Gaps in Distribution: Sometimes, there might be no data points in certain value ranges, leading to gaps in the distribution. These gaps are not flaws but indicate ranges where no values were observed.
In conclusion, the Paytience Distribution indicator offers a robust mechanism to visualize the distribution of data from various sources. By understanding its intricacies, users can make better-informed trading decisions based on the distribution and behavior of their chosen data source.
Bursa Malaysia Index Series
The index computation is as follows:
Index = (Current Aggregate Market Capitalisation / Base Aggregate Market Capitalisation) x 100.
The Bursa Malaysia Index Series is calculated and disseminated on a real-time basis at 60-second intervals during Bursa’s trading hours.
Label_Trades Enter your trade information to display on chart
This indicator is an overlay for your main chart. It will display your trade entry and trade close positions on your chart.
After you place the indicator on your chart you will need to enter the trade information that you want to display.
You can open the input settings by clicking on the gear sprocket that appears when you hover your mouse over the indicator name. There are 7 settings you will want to fill in.
Date and Time Bought
Date and Time Sold
Trade Lot Size
Select whether the trades was 'long' or 'short'
The price for buying the Trade
The price for selling the Trade
On the third tab
The code is straightforward. A conditional based on whether the trade was 'long' or 'short' determines where the labels will be placed and whether they show a long trade or a short trade. It also displays a tooltip when you hover over the label. The tooltip will display the number of lots bought or sold and the price.
The label.new() function is the meat of the indicator. I will go over it line by line to explain the options available.
Pine Script manual (www.tradingview.com)
The function parameters can be called out by name, as in the example, or the values can be placed comma-separated. If you do the latter you must enter the parameters in order. I like naming the parameters as I place them so I can easily see what I did.
label.new(
x=t_bot, // x is the time the transaction occurred
y=na, // y is for the y-axis; it is not used here, so 'na' tells Pine Script to ignore the parameter
xloc=xloc.bar_time, // xloc specifies that x is a time value
yloc=yloc.belowbar, // yloc specifies to place the label under the bar. There are other locations to use. See the language reference (www.tradingview.com)
style=label.style_triangleup, // This parameter selects the label style. There are many other styles to use, see the manual.
color=color.green, // the label fill color
size=size.small, // the label size
tooltip=str.tostring(lot_size) + " lots bought at $" + str.tostring(bot_val)) // Some parameters are tricky. This one needs to be a string but we are using an integer value (lot_size) and a float value (bot_val). They are all concatenated via the "+" sign. In order to do this the numeric values need to be cast or converted into strings. The string function str.tostring() does this.
Z-Score Based Momentum Zones with Advanced Volatility Channels
The indicator "Z-Score Based Momentum Zones with Advanced Volatility Channels" combines various technical analysis components, including volatility, price changes, and volume correction, to calculate Z-Scores, determine momentum zones, and provide a visual representation of price movements and volatility based on multi-timeframe highest-high and lowest-low values.
Note: THIS IS AN IMPROVEMENT OF MY "Multi Time Frame Composite Bands" INDICATOR, WITH MORE EMPHASIS ON MOMENTUM ZONES CALCULATED BASED ON Z-SCORES
Input Options
look_back_length: This input specifies the look-back period for calculating the intraday volatility correction. It is set to a default value of 5.
lookback_period: This input sets the look-back period for calculating relative price change. The default value is 5.
zscore_period: This input determines the look-back period for calculating the Z-Score. The default value is 500.
avgZscore_length: This input defines the length of the momentum block used in calculations, with a default value of 14.
include_vc: This is a boolean input that, if set to true, enables volume correction in the calculations. By default, it is set to false.
1. Volatility Bands (Composite High and Low):
Composite High and Low: These are calculated by combining different moving averages of the high prices (high) and low prices (low). Specifically:
a_high and a_low are calculated as the average of the highest (ta.highest) and lowest (ta.lowest) high and low prices over various look-back periods (5, 8, 13, 21, 34) to capture short and long-term trends.
b_high and b_low are calculated as the simple moving average (SMA) of the high and low prices over different look-back periods (5, 8, 13) to smooth out the trends.
high_c and low_c are obtained by averaging a_high with b_high and a_low with b_low respectively.
IDV Correction Calculation: In this script, the Intraday Volatility (IDV) is calculated as the simple moving average (SMA) of the daily high-low price range divided by the closing price. This measures how much the price fluctuates in a given period.
Composite High and Low with Volatility Correction: The final c_high and c_low values are obtained by adjusting high_c and low_c with the calculated intraday volatility (IDV). These values are used to create the "Composite High" and "Composite Low" plots.
2. Momentum Blocks Based on Z-Score:
Relative Price Change (RPC):
The Relative Price Change (rpdev) is calculated as the difference between the current high-low-close average (hlc3) and the previous simple moving average (psma_hlc3) of the same quantity. This measures the change in price over time.
Additionally, std_hlc3 is calculated as the standard deviation of the hlc3 values over a specified look-back period. The standard deviation quantifies the dispersion or volatility in the price data.
The rpdev is then divided by the std_hlc3 to normalize the price change by the volatility. This normalization ensures that the price change is expressed in terms of standard deviations, which is a common practice in quantitative analysis.
Essentially, the rpdev represents how many standard deviations the current price is away from the previous moving average.
Volume Correction (VC): If the include_vc input is set to true, volume correction is applied by dividing the trading volume by the previous simple moving average of the volume (psma_volume). This accounts for changes in trading activity.
Volume Corrected Relative Price Change (VCRPD): The vcrpd is calculated by multiplying the rpdev by the volume correction factor (vc). This incorporates both price changes and volume data.
Z-Scores: The Z-scores are calculated by taking the difference between the vcrpd and the mean (mean_vcrpd) and then dividing it by the standard deviation (stddev_vcrpd). Z-scores measure how many standard deviations a value is away from the mean. They help identify whether a value is unusually high or low compared to its historical distribution.
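Pulling the calculation chain above together, a simplified sketch could read as follows. SMA-based means and ta.stdev are assumptions; variable names mirror the description, but the published script may differ.
//@version=5
indicator("VCRPD z-score sketch")
lookback   = input.int(5, "Relative price change lookback")
zLen       = input.int(500, "Z-score period")
avgLen     = input.int(14, "Momentum block length")
include_vc = input.bool(false, "Include volume correction")
psma_hlc3   = ta.sma(hlc3, lookback)[1]            // previous SMA of hlc3
std_hlc3    = ta.stdev(hlc3, lookback)
rpdev       = (hlc3 - psma_hlc3) / std_hlc3        // relative price change in standard deviations
psma_volume = ta.sma(volume, lookback)[1]
vc          = include_vc ? volume / psma_volume : 1.0   // volume correction factor
vcrpd       = rpdev * vc
mean_vcrpd   = ta.sma(vcrpd, zLen)
stddev_vcrpd = ta.stdev(vcrpd, zLen)
zScore       = (vcrpd - mean_vcrpd) / stddev_vcrpd
avgZScore    = ta.sma(zScore, avgLen)
plot(avgZScore, "avgZScore", avgZScore > 0.25 ? color.green : avgZScore < -0.25 ? color.red : color.yellow)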
Momentum Blocks: The "Momentum Blocks" are essentially derived from the Z-scores (avgZScore). The script assigns different colors to the "Fill Area" based on predefined Z-score ranges. These colored areas represent different momentum zones:
Positive Z-scores indicate bullish momentum, and different shades of green are used to fill the area.
Negative Z-scores indicate bearish momentum, and different shades of red are used.
Z-scores near zero (between -0.25 and 0.25) suggest neutrality, and a yellow color is used.
Robust Bollinger Bands with Trend Strength
The "Robust Bollinger Bands with Trend Strength" indicator is a technical analysis tool designed to assess price volatility, identify potential trading opportunities, and gauge trend strength. It combines several robust statistical methods and percentile-based calculations to provide valuable information about price movements with improved resilience to noise, while mitigating the impact of outliers and non-normality in price data.
Here's a breakdown of how this indicator works and the information it provides:
Bollinger Bands Calculation: Similar to traditional Bollinger Bands, this indicator calculates the upper and lower bands that envelop the median (centerline) of the price data. These bands represent the potential upper and lower boundaries of price movements.
Robust Statistics: Instead of using standard deviation, this indicator employs robust statistical measures to calculate the bands (spread). Specifically, it uses the Interquartile Range (IQR), which is the range between the 25th percentile (low price) and the 75th percentile (high price). Robust statistics are less affected by extreme values (outliers) and data distributions that may not be perfectly normal. This makes the bands more resistant to unusual price spikes.
Median as Centerline: The indicator utilizes the median of the chosen price source (either HLC3 or VWMA) as the central reference point for the bands. The median is less affected by outliers than the mean (average), making it a robust choice. This can help identify the center of price action, which is useful for understanding whether prices are trending or ranging.
Trend Strength Assessment: The indicator goes beyond standard Bollinger Bands by incorporating a measure of trend strength. It uses a robust rank-based correlation coefficient to assess the relationship between the price source and the bar index (time). This correlation coefficient, calculated over a specified length, helps determine whether a trend is strongly positive (uptrend), strongly negative (downtrend), or weak and non-existent. When the rank-based correlation coefficient shifts, it indicates exhaustion of the prevailing trend. The indicator is designed to provide statistically valid information about trend strength while minimizing the impact of outliers and data distribution characteristics. The parameter choices, including a length of 14 and a correlation threshold of +/-0.7, are considered to offer meaningful insights into market conditions and statistical validity (p-value < 0.05, statistically significant). Rank-based correlation is a robust alternative to traditional Pearson correlation, especially in the context of financial markets.
Trend Fill: Based on the robust rank-based correlation coefficient, the indicator fills the area between the upper and lower Bollinger Bands with different colors to visually represent the trend strength. For example, it may use green for an uptrend, red for a down trend, and a neutral color for a weak or ranging market. This visual representation can help traders quickly identify potential trend opportunities. In addition the middle line also informs about the overall trend direction of the median.
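A rough sketch of the band construction is below. The exact band formula, the percentile method, and the use of plain ta.correlation in place of the script's rank-based coefficient are all assumptions made for illustration.
//@version=5
indicator("Robust bands sketch", overlay=true)
len  = input.int(14, "Length")
mult = input.float(1.5, "IQR multiplier")
center = ta.median(hlc3, len)                        // robust centerline
q1  = ta.percentile_nearest_rank(low,  len, 25)
q3  = ta.percentile_nearest_rank(high, len, 75)
iqr = q3 - q1                                        // interquartile range as robust spread
upper = center + mult * iqr
lower = center - mult * iqr
trendCorr = ta.correlation(hlc3, bar_index, len)     // stand-in for the rank-based coefficient
bandColor = trendCorr > 0.7 ? color.green : trendCorr < -0.7 ? color.red : color.gray
plot(center, "Median", bandColor)
pU = plot(upper, "Upper", color.new(bandColor, 50))
pL = plot(lower, "Lower", color.new(bandColor, 50))
fill(pU, pL, color.new(bandColor, 85))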
Cross Correlation [Kioseff Trading]
Hello!
This script "Cross Correlation" calculates up to ~10,000 lag-symbol pair cross correlation values simultaneously!
Cross correlation calculation for 20 symbols simultaneously
+/- Lag Range is theoretically infinite (configurable min/max)
Practically, calculate up to 10000 lag-symbol pairs
Results can be sorted by greatest absolute difference or greatest sum
Ability to "isolate" the symbol on your chart and check for cross correlation against a list of symbols
Script defaults to stock pairs when on a stock, Forex pairs when on a Forex pair, crypto when on a crypto coin, futures when on a futures contract.
A custom symbol list can be used for cross correlation checking
Can check any number of available historical data points for cross correlation
Practical Assessment
Ideally, we can calculate cross correlation to determine if, in a list of assets, any of the assets frequently lead or lag one another.
Example
Say we are comparing the log returns for the previous 10 days for SPY and XLU.
*A single time-interval corresponds to the timeframe of your chart i.e. 1-minute chart = 1-minute time interval. We're using days for this example.
(Example Results)
A lag value (k) +/-3 is used.
The cross correlation (normalized) for k = +3 is -0.787
The cross correlation (normalized) for k = -3 is 0.216
A positive "k" value indicates the correlation when Asset A (SPY) leads Asset B (XLU)
A negative "k" value indicates the correlation when Asset B (XLU) leads Asset A (SPY)
A normalized cross correlation of -0.787 for k = +3 indicates an "adequately strong" negative relationship when SPY leads XLU by 3 days.
When SPY increases or decreases - XLU frequently moves in the opposite direction 3 days later.
A cross correlation value of 0.216 at k = −3 indicates a "weak" positive correlation when XLU leads SPY by 3 days.
There's a slight tendency for SPY to move in the same direction as XLU 3 days later.
After the cross-correlation score is normalized it will fall between -1 and 1.
A cross-correlation score of 1 indicates a perfect directional relationship between asset A and asset B at the corresponding lag (k).
A cross correlation of -1 indicates a perfect inverse relationship between asset A and asset B at the corresponding lag (k).
A cross correlation of 0 indicates no correlation at the corresponding lag (k).
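As a loose illustration of the math (the published script's implementation will differ), the normalized cross correlation at lag k can be approximated as the Pearson correlation between asset A's log returns and asset B's log returns offset by k bars:
//@version=5
indicator("Cross-correlation sketch")
symB = input.symbol("AMEX:XLU", "Asset B")          // asset A is the chart symbol
len  = input.int(10, "Lookback (bars)")
k    = input.int(3, "Lag k (A leads B by k bars)", minval=0)
retA   = math.log(close / close[1])
closeB = request.security(symB, timeframe.period, close)
retB   = math.log(closeB / closeB[1])
cc = ta.correlation(retA[k], retB, len)             // swap retA/retB to test B leading A (negative k)
plot(cc, "Normalized cross correlation (lag k)", color.teal)
hline(0)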
The image above shows the primary usage for the script!
The image above further explains the data points located in the table!
The image above shows the script "isolating" the symbol on my chart and checking the cross correlation between the symbol and a list of symbols!
Wrapping Up
With this information, hopefully you can find some meaningful lead-lag relationships amongst assets!
Thank you for checking this out (:
Z-Score Support & Resistance [SS]
Hello everyone,
This is the Z-Score Support and Resistance (S/R) indicator.
How it works:
The trouble with most indicators and strategies that rely on distributions is that they are constantly moving targets.
To combat this, what I have done is anchored the assessment of the normal distribution to the period open price and dropped the data from the current day.
This provides us with a static assessment of the current distribution and static target levels.
It then plots out an assessment of what would be neutral (0 Standard Deviations) all the way up to +3 Standard Deviations and all the way down to -3 Standard Deviations.
It can plot out this assessment on any timeframe, from the minutes to the months to the years; simply select the desired timeframe in the settings menu (the default is 9, which seems to work well for most generic tickers and indices).
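To make the construction concrete, here is a rough sketch under stated assumptions: the 0 SD line is anchored at the assessment timeframe's open, and the standard deviation comes from the prior closed periods only (the current period is dropped). The published script's exact method may differ.
//@version=5
indicator("Anchored SD levels sketch", overlay=true)
htf = input.timeframe("D", "Assessment timeframe")
lb  = input.int(9, "Period lookback")
[anchor, sigma] = request.security(syminfo.tickerid, htf, [open, ta.stdev(close, lb)[1]], lookahead=barmerge.lookahead_on)
plot(anchor, "0 SD", color.gray)
plot(anchor + 1 * sigma, "+1 SD", color.teal)
plot(anchor + 2 * sigma, "+2 SD", color.teal)
plot(anchor + 3 * sigma, "+3 SD", color.teal)
plot(anchor - 1 * sigma, "-1 SD", color.red)
plot(anchor - 2 * sigma, "-2 SD", color.red)
plot(anchor - 3 * sigma, "-3 SD", color.red)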
The indicator will also count the number of times a ticker has closed within each designated period. To do this, please make sure that you have the assessment timeframe opened on the chart. So if you want to look at the instances on the daily timeframe, ensure you have the daily timeframe opened. If you want to look on the monthly, ensure you have the monthly opened, etc. (See below):
How to Use:
To use the indicator, it's pretty simple.
Simply select the desired timeframe you want to use as S/R and use it!
You can adjust the period lookback from the default 9 periods based on:
a) The degree of normality in the dataset (you can use a kurtosis indicator to help you ascertain this); or
b) The back-test results of closes within a desired range.
For the latter, you can see an example below:
This is TSLA with a 9 period lookback:
We can see that 50% of closes are happening within 0.5 and -0.5 standard deviations. If we extend this to a 15 period lookback:
Now over 60% of closes are happening in this area.
Why does this matter? Well, because now we know our prime short and long entries (see below):
The green arrows represent prime long setups and the red prime short setups.
This is because we know that, 61% of the time, the ticker will close between +0.5 and -0.5 standard deviations, so we can trade the ticker back to this area.
Further instructions:
Because it is somewhat of a complex indicator, I have done a tutorial video that I will link below here:
And that is the indicator my friends! Hopefully you enjoy :-).
As always, leave your comments and suggestions / Questions below!
Safe trades!
VWMA/SMA Delta Volatility (Statistical Anomaly Detector)
The "VWMA/SMA Delta Volatility (Statistical Anomaly Detector)" indicator is a tool designed to detect and visualize volatility in a financial market's price data. The indicator calculates the difference (delta) between two moving averages (VWMA/SMA) and uses statistical analysis to identify anomalies or extreme price movements. Here's a breakdown of its components:
Hypothesis:
The hypothesis behind this indicator is that extreme price movements or anomalies in the market can be detected by analyzing the difference between two moving averages and comparing it to a statistically derived normal distribution. When the MA delta (the difference between two MAs: VWMA/SMA) exceeds a certain threshold based on standard deviation and the Z-score coefficient, it may indicate increased market volatility or potential trading opportunities.
Calculation of MA Delta:
The indicator calculates the MA delta by subtracting a simple moving average (SMA) from a volume-weighted moving average (VWMA) of a selected price source. This calculation represents the difference in the market's short-term and long-term trends.
Statistical Analysis:
To detect anomalies, the indicator performs statistical analysis on the MA delta. It calculates a moving average (MA) of the MA delta and its standard deviation over a specified sample size. This MA acts as a baseline, and the standard deviation is used to measure how much the MA delta deviates from the mean.
Delta Normalization:
The MA delta, lower filter, and upper filter are normalized using a function that scales them to a specific range, typically from -100 to 100. Normalization helps in comparing these values on a consistent scale and enhances their visual representation.
Visual Representation:
The indicator visualizes the results through histograms and channels:
The histogram bars represent the normalized MA delta. Red bars indicate negative and below-lower-filter values, green bars indicate positive and above-upper-filter values, and silver bars indicate values within the normal range.
It also displays a Z-score channel, which represents the upper and lower filters after normalization. This channel helps traders identify price levels that are statistically significant and potentially indicative of market volatility.
In summary, the "MA Delta Volatility (Statistical Anomaly Detector)" indicator aims to help traders identify abnormal price movements in the market by analyzing the difference between two moving averages and applying statistical measures. It can be a valuable tool for traders looking to spot potential opportunities during periods of increased volatility or to identify potential market anomalies.
Trade Warehouse (SPOT trades)
Hello there!
Let's imagine you are trading SPOT, buying more and more on every new dump, but the bear market is not going to stop... and your first trade was 3 YEARS AGO!!!
Can't believe it is true.
The problem is that exchanges allow you to see only trades from the last 6 months (Binance). But I want to see all of them! How do I know my AVG price?
This script is my solution. Just use it to track and store your trades, so you can see the AVG without uploading old trades every time and using a calculator.
Script description:
Here you can see the "Trade" variable type. A Python script using Pandas converts trades from a .csv file into strings that you can input as trade(price, pair, amount, date, ...). Each trade is then appended to the trades_array and pushed into the loop.
If the trade date is later than the current candle, the new trade is pushed to other arrays such as "pair", "avg_tot", etc., to be computed later.
If the trade was a buy, it increases the invested capital and owned amount (the opposite for a sell) and recomputes the AVG price.
Once the script has at least 1 trade, it starts to plot the AVG price.
There are 2 AVG prices:
1. For total invested counting (you can get a negative value if you traded successfully)
2. The current AVG price since the last time the currency amount was 0 (there is a dust value to set how many USD we treat as dust)
The table presents statistics for all assets.
Just upload your trades one time, use the script to convert them into Pine code, and use it as an indicator. This script allows you to see ALL trades from the oldest to the newest.
github.com/Arivadis/...w_Tradings_warehouse
If this script helped you, press Star (on GitHub) and Like (on TradingView).
Warning -
Does not include free/earn/withdraw/deposit counting. Only Buy and Sell =>
This script has no idea about your side currency deposits, so if you got your BTC or EUR or .. from another wallet and sold it later, it can break your statistical data. Add such transfers manually (see examples inside the script).
Use my GitHub manual to get this script working.
Installation takes around 3 minutes and involves 3-5 steps.
[GTH decimals heatmap] (wide screen advised)
Preface
I share my personal general view on indicators below; skip ahead to the Description below if you are not interested.
It is my personal conviction that most - if not all - indicators rely mainly on traders' belief that they work, and in a feedback system like free markets they might become a self-fulfilling prophecy as a result, if (!) a big enough part of the traders believes in it, because some famous trader releases an indicator, or such a person's public statement goes viral.
One of those voodoo indicators is the famous "follow-through day". There is zero statistical evidence for its validity, beyond the validity of a statement like "If it's bright at day it's usually the sun shining". The uselessness was proven exactly on its inventor's YT channel, Investors Business Daily. According to the examiner, its inventor William J. O'Neil himself could not explain the values used for this indicator. It might have been an incidental observation at some point without general validity. A.k.a "curve fitting". Still, it's being used by many today.
Another one of those indicators is the three points reversal on the S&P 500 Volatility Index (VIX) which allegedly might potentially maybe indicate a possible shift in trend. Both indicators share an immediately problematic feature: They use absolute values. Nothing is ever absolute in a highly subjective and emotionally driven game like the markets where a lot of money can be made and lost.
Most indicators can not produce additional information since they can only re-pack price/volume action. Many times an interpretion of the distance between price and a moving average and/or the slope of a moving average deliver very similar - if not better - results than MACD, RSI etc., especially with standard settings, the origin of which are usually unknown (always a warning sign). Very few indicators can deliver information which is otherwise hard to quantify, e. g. market noise (Kaufman's Efficiency Ratio or Price Density) or volatility, standard deviation etc.
It is common knowledge that trading the markets is a game of probability. No indicator works all the time (or at all, see above). In order to make decisions based on any indicator, the probability for its validity and the conditions under which validity seemed to have occurred, must be known. Otherwise it is just coffee grounds reading under the illusion of adding to the edge, when in fact it is only adding to the trees, making it even harder to see the forest.
Description
A common belief is that whole or half-dollar prices tend to be attraction points in price action, so a number of traders include them in their decision making. But are they really...?
Spoiler Alert:
Generally, it is safe to say that for the large majority of stocks there is very thin evidence for it. It depends vastly on the asset, the timeframe used and the market period (pre/post/main trading times). If at all, there seems to be above-random but still thin evidence for whole prices being significant attraction points. Interesting/surprising patterns are visible on many stocks/timeframes/session periods, though.
The screenshot shows TSLA, 30m timeframe, two heatmaps added. The top one shows pre/post-market data only, the bottom one main market data only. The cyan fields indicate the strongest occurrence, the dark blue fields indicate the weakest occurrence of open/high/low/close prices at the respective decimal. The red field indicates the current/last price decimal.
Clearly, TSLA displays a strong pre-market attraction for .00, followed by .33 and .67 and .50. This pattern of thirds seems to be a unique feature of TSLA. In the main trading session it is being diluted by a more random distribution.
Other interesting equities to examine:
SPY: No significant pattern on any timeframe!
META: Generally weak patterns on all timeframes, but interestingly on the 1D there is evidence for less randomness on O and H, more on L and most on C.
AAPL: 1D, foggy attraction areas around .35 and .12. Whole price is no attraction area at all! Very weak attraction around .73.
AMD: Strong pattern on D, W, M, attraction areas around 1/16th intervals. No patterns on lower timeframes.
AMZN: Significant differences between pre/post and main session. Strong 1/16th pattern below D in pre/post.
TAOP: Strong 1/5th pattern on all timeframes.
Read the tool tips and go explore!
Candle Tick Size
Hello everyone!
I don't think this exists; I couldn't find it any way I searched, so maybe it is part of a bigger indicator. This is really basic code: all it does is show the tick/pip size of the candles as they form. You can adjust how many candles it should show. Also, because the code counts the point size of the candles from high to low, you can adjust how many ticks are in one point, e.g. for ES and NQ 4 ticks to a point, which is the default setting. It helps me with entries when I calculate the contract size, so my risk/reward stays pretty much the same regardless of the candle size at my entries.
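For reference, a minimal version of this idea in Pine Script v5 might look like the following; the input names and label placement are assumptions, not the published code.
//@version=5
indicator("Candle tick size sketch", overlay=true)
numCandles    = input.int(10, "Candles to label", minval=1)
ticksPerPoint = input.int(4, "Ticks per point")        // ES/NQ default: 4 ticks = 1 point
ticks = (high - low) / syminfo.mintick                 // candle range in ticks
pts   = ticks / ticksPerPoint                          // candle range in points
if bar_index > last_bar_index - numCandles
    label.new(bar_index, high, str.tostring(pts, "#.##"), style=label.style_label_down, size=size.small)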
Multi-Asset Performance [Spaghetti] - By Leviathan
This indicator visualizes the cumulative percentage changes or returns of 30 symbols over a given period and offers a unique set of tools and data analytics for deeper insight into the performance of different assets.
Multi Asset Performance indicator (also called “Spaghetti”) makes it easy to monitor the changes in Price, Open Interest, and On Balance Volume across multiple assets simultaneously, distinguish assets that are overperforming or underperforming, observe the relative strength of different assets or currencies, use it as a tool for identifying mean reversion opportunities and even for constructing pairs trading strategies, detect "risk-on" or "risk-off" periods, evaluate statistical relationships between assets through metrics like correlation and beta, construct hedging strategies, trade rotations and much more.
Start by selecting a time period (e.g., 1 DAY) to set the interval for when data is reset. This will provide insight into how price, open interest, and on-balance volume change over your chosen period. In the settings, asset selection is fully customizable, allowing you to create three groups of up to 30 tickers each. These tickers can be displayed in a variety of styles and colors. Additional script settings offer a range of options, including smoothing values with a Simple Moving Average (SMA), highlighting the top or bottom performers, plotting the group mean, applying heatmap/gradient coloring, generating a table with calculations like beta, correlation, and RSI, creating a profile to show asset distribution around the mean, and much more.
One of the most important script tools is the screener table, which can display:
🔸 Percentage Change (Represents the return or the percentage increase or decrease in Price/OI/OBV over the current selected period)
🔸 Beta (Represents the sensitivity or responsiveness of asset's returns to the returns of a benchmark/mean. A beta of 1 means the asset moves in tandem with the market. A beta greater than 1 indicates the asset is more volatile than the market, while a beta less than 1 indicates the asset is less volatile. For example, a beta of 1.5 means the asset typically moves 150% as much as the benchmark. If the benchmark goes up 1%, the asset is expected to go up 1.5%, and vice versa.)
🔸 Correlation (Describes the strength and direction of a linear relationship between the asset and the mean. Correlation coefficients range from -1 to +1. A correlation of +1 means that two variables are perfectly positively correlated; as one goes up, the other will go up in exact proportion. A correlation of -1 means they are perfectly negatively correlated; as one goes up, the other will go down in exact proportion. A correlation of 0 means that there is no linear relationship between the variables. For example, a correlation of 0.5 between Asset A and Asset B would suggest that when Asset A moves, Asset B tends to move in the same direction, but not perfectly in tandem.)
🔸 RSI (Measures the speed and change of price movements and is used to identify overbought or oversold conditions of each asset. The RSI ranges from 0 to 100 and is typically used with a time period of 14. Generally, an RSI above 70 indicates that an asset may be overbought, while RSI below 30 signals that an asset may be oversold.)
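For intuition on how metrics of this kind can be derived (a generic sketch, not the indicator's source; the benchmark symbol below merely stands in for the group mean series), beta can be obtained from the correlation and the ratio of the standard deviations of returns:
//@version=5
indicator("Beta / correlation sketch")
refLen   = input.int(50, "Reference length")
benchSym = input.symbol("SP:SPX", "Benchmark")   // stand-in for the group mean series
assetRet = close / close[1] - 1
benchRet = request.security(benchSym, timeframe.period, close / close[1] - 1)
corr = ta.correlation(assetRet, benchRet, refLen)
beta = corr * ta.stdev(assetRet, refLen) / ta.stdev(benchRet, refLen)   // beta = cov/var = r * sd_asset / sd_bench
plot(beta, "Beta", color.orange)
plot(corr, "Correlation", color.teal)
// RSI for the table would simply be ta.rsi(close, 14)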
⚙️ Settings Overview:
◽️ Period
Periodic inputs (e.g. daily, monthly, etc.) determine when the values are reset to zero and begin accumulating again until the period is over. This visualizes the net change in the data over each period. The input "Visible Range" is auto-adjustable as it starts the accumulation at the leftmost bar on your chart, displaying the net change in your chart's visible range. There's also the "Timestamp" option, which allows you to select a specific point in time from where the values are accumulated. The timestamp anchor can be dragged to a desired bar via Tradingview's interactive option. Timestamp is particularly useful when looking for outperformers/underperformers after a market-wide move. The input positioned next to the period selection determines the timeframe on which the data is based. It's best to leave it at default (Chart Timeframe) unless you want to check the higher timeframe structure of the data.
◽️ Data
The first input in this section determines the data that will be displayed. You can choose between Price, OI, and OBV. The second input lets you select which one out of the three asset groups should be displayed. The symbols in the asset group can be modified in the bottom section of the indicator settings.
◽️ Appearance
You can choose to plot the data in the form of lines, circles, areas, and columns. The colors can be selected by choosing one of the six pre-prepared color palettes.
◽️ Labeling
This input allows you to show/hide the labels and select their appearance and size. You can choose between Label (colored pointed label), Label and Line (colored pointed label with a line that connects it to the plot), or Text Label (colored text).
◽️ Smoothing
If selected, this option will smooth the values using a Simple Moving Average (SMA) with a custom length. This is used to reduce noise and improve the visibility of plotted data.
◽️ Highlight
If selected, this option will highlight the top and bottom N (custom number) plots, while shading the others. This makes the symbols with extreme values stand out from the rest.
◽️ Group Mean
This input allows you to select the data that will be considered as the group mean. You can choose between Group Average (the average value of all assets in the group) or First Ticker (the value of the ticker that is positioned first on the group's list). The mean is then used in calculations such as correlation (as the second variable) and beta (as a benchmark). You can also choose to plot the mean by clicking on the checkbox.
◽️ Profile
If selected, the script will generate a vertical volume profile-like display with 10 zones/nodes, visualizing the distribution of assets below and above the mean. This makes it easy to see how many or what percentage of assets are outperforming or underperforming the mean.
◽️ Gradient
If selected, this option will color the plots with a gradient based on the proximity of the value to the upper extreme, zero, and lower extreme.
◽️ Table
This section includes several settings for the table's appearance and the data displayed in it. The "Reference Length" input determines the number of bars back that are used for calculating correlation and beta, while "RSI Length" determines the length used for calculating the Relative Strength Index. You can choose the data that should be displayed in the table by using the checkboxes.
◽️ Asset Groups
This section allows you to modify the symbols that have been selected to be a part of the 3 asset groups. If you want to change a symbol, you can simply click on the field and type the ticker of another one. You can also show/hide a specific asset by using the checkbox next to the field.
Pro Bollinger Bands CalculatorThe "Pro Bollinger Bands Calculator" indicator joins our suite of custom trading tools, which includes the "Pro Supertrend Calculator", the "Pro RSI Calculator" and the "Pro Momentum Calculator."
Expanding on this series, the "Pro Bollinger Bands Calculator" is tailored to offer traders deeper insights into market dynamics by harnessing the power of the Bollinger Bands indicator.
Its core mission remains unchanged: to scrutinize historical price data and provide informed predictions about future price movements, with a specific focus on detecting potential bullish (green) or bearish (red) candlestick patterns.
1. Bollinger Bands Calculation:
The indicator kicks off by computing the Bollinger Bands, a well-known volatility indicator. It calculates two pivotal Bollinger Bands parameters (a short calculation sketch follows this list):
- Bollinger Bands Length: This parameter sets the lookback period for Bollinger Bands calculations.
- Bollinger Bands Deviation: It determines the deviation multiplier for the upper and lower bands, typically set at 2.0.
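As a reference for how these two inputs are typically applied, here is a minimal Pine Script sketch of a standard Bollinger Bands calculation. The input names and colors are illustrative; the published indicator's internals may differ.
//@version=5
indicator("Bollinger Bands (sketch)", overlay = true)
bbLength = input.int(20, "Bollinger Bands Length")              // lookback period
bbDev    = input.float(2.0, "Bollinger Bands Deviation", step = 0.1)
basis   = ta.sma(close, bbLength)                               // middle line
dev     = bbDev * ta.stdev(close, bbLength)
upperBB = basis + dev
lowerBB = basis - dev
plot(upperBB, "Upper band", color.red)
plot(lowerBB, "Lower band", color.teal)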
2. Visualizing Bollinger Bands:
The Bollinger Bands derived from the calculations are skillfully plotted on the price chart:
- Red Line: Represents the upper Bollinger Band during bearish trends, suggesting potential price declines.
- Teal Line: Represents the lower Bollinger Band in bullish market conditions, signaling the possibility of price increases.
3. Analyzing Consecutive Candlesticks:
The indicator's core functionality revolves around tracking consecutive candlestick patterns based on their relationship with the Bollinger Bands lines. To be considered for analysis, a candlestick must consistently close either above (green candles) or below (red candles) the Bollinger Bands lines for multiple consecutive periods.
4. Labeling and Enumeration:
To convey the count of consecutive candles displaying consistent trend behavior, the indicator meticulously assigns labels to the price chart. The position of these labels varies depending on the direction of the trend, appearing either below (for bullish patterns) or above (for bearish patterns) the candlesticks. The label colors match the candle colors: green labels for bullish candles and red labels for bearish ones.
5. Tabular Data Presentation:
The indicator complements its graphical analysis with a customizable table that prominently displays comprehensive statistical insights. Key data points within the table encompass the following (a sketch of one possible probability tally follows this list):
- Consecutive Candles: The count of consecutive candles displaying consistent trend characteristics.
- Candles Above Upper BB: The number of candles closing above the upper Bollinger Band during the consecutive period.
- Candles Below Lower BB: The number of candles closing below the lower Bollinger Band during the consecutive period.
- Upcoming Green Candle: An estimated probability of the next candlestick being bullish, derived from historical data.
- Upcoming Red Candle: An estimated probability of the next candlestick being bearish, also based on historical data.
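The description does not disclose exactly how these probabilities are computed; the sketch below is only one plausible tally, assumed for illustration: the historical frequency of a green candle immediately following a close above the upper band. The published script's actual method may differ.
//@version=5
indicator("Next-candle probability (sketch)")
bbLength = input.int(20, "BB Length")
bbDev    = input.float(2.0, "BB Deviation")
basis   = ta.sma(close, bbLength)
upperBB = basis + bbDev * ta.stdev(close, bbLength)
var int aboveCount = 0                      // candles that closed above the upper band
var int greenNext  = 0                      // ...and were followed by a green candle
if close[1] > upperBB[1]                    // previous candle closed above the band
    aboveCount += 1
    if close > open                         // current candle is the "next" candle
        greenNext += 1
probGreen = aboveCount > 0 ? 100.0 * greenNext / aboveCount : na
plot(probGreen, "P(next candle green), %")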
6. Custom Configuration:
To cater to diverse trading strategies and preferences, the indicator offers extensive customization options. Traders can fine-tune parameters such as Bollinger Bands length, upper and lower band deviations, label and table placement, and table size to align with their unique trading approaches.
The Next Pivot [Kioseff Trading]Hello!
This script "The Next Pivot" uses various similarity measures to compare historical price sequences to the current price sequence!
Features
Find the most similar price sequence up to 100 bars from the current bar
Forecast price path up to 250 bars
Forecast ZigZag up to 250 bars
Spearman
Pearson
Absolute Difference
Cosine Similarity
Mean Squared Error
Kendall
Forecasted linear regression channel
The image above shows/explains some of the indicator's capabilities!
The image above highlights the projected zig zag (pivots) pattern!
Colors are customizable (:
Additionally, you can plot a forecasted LinReg channel.
Should load times permit it, the script can search all bar history for a correlating sequence. This won't always be possible; it depends on the forecast length, the correlation length, and the number of bars on the chart.
Reasonable Assessment
The script uses various similarity measures to find the "most similar" price sequence to what's currently happening. Once found, the subsequent price move (to the most similar sequence) is recorded and projected forward.
So,
1: Script finds most similar price sequence
2: Script takes what happened after and projects forward
While this may be useful, the projection is simply the reaction to a possible one-off "similarity" to what's currently happening. Random fluctuations are likely and, if occurring, similarities between the current price sequence and the "most similar" sequence are plausibly coincidental.
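As a rough illustration of that two-step idea, the sketch below scores every historical window of closes against the most recent window using Pearson correlation and remembers the best match. All names and lengths are assumptions; the published script also supports the other similarity measures listed above and projects the subsequent move, which this sketch omits.
//@version=5
indicator("Most similar sequence (sketch)", max_bars_back = 1500)
seqLen   = input.int(50, "Correlation length")
lookback = input.int(1000, "Bars to search back")
// Pearson correlation between the window ending now and the one ending `off` bars ago
pearsonAt(int off, int len) =>
    sx  = 0.0
    sy  = 0.0
    sxx = 0.0
    syy = 0.0
    sxy = 0.0
    for i = 0 to len - 1
        x = close[i]
        y = close[i + off]
        sx  += x
        sy  += y
        sxx += x * x
        syy += y * y
        sxy += x * y
    n   = float(len)
    num = n * sxy - sx * sy
    den = math.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))
    den == 0 ? 0.0 : num / den
var float bestR   = na
var int   bestOff = na
if barstate.islast                          // search once, on the latest bar
    bestR := -2.0
    for off = seqLen to lookback
        r = pearsonAt(off, seqLen)
        if r > bestR
            bestR   := r
            bestOff := off
plot(bestR, "Best correlation found")
plot(bestOff, "Offset of best match (bars ago)")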
That said, if you have any ideas on cool features to add please let me know!
Thank you (:
Position and Risk Calculator (for Indices) [dR-Algo]Position and Risk Calculator: Your Ultimate Risk Management Tool for Indices
The difference between a novice and a seasoned trader often comes down to one essential element: risk management. While trading indices, the challenges are even more intense due to market volatility and leverage. The Position and Risk Calculator steps in here to bridge the gap, providing you with an efficient tool designed exclusively for indices trading.
Key Features:
User-Friendly Interface: Designed to integrate effortlessly with your TradingView chart, this tool's interface is intuitive and clutter-free.
Dynamic Price Level Adjustment: Move your Entry, Stop Loss, and Take Profit levels directly on the chart for an interactive experience.
Account Balance Input: Customize the tool to understand your unique financial situation by inputting your current account balance.
Trade Risk Customization: Define how much you're willing to risk per trade, and the tool will do the rest.
Automated Calculations: The indicator calculates the maximum monetary risk and translates it into the maximum lot size you can afford. It delivers a full-integer lot size to make your trading decisions easier.
Comprehensive Risk Evaluation: Beyond lot sizes, it provides you with the Cost-to-Reward Ratio (CRV) of your trade, the actual monetary risk according to the calculated lot size, and the potential profit.
How To Use:
Once you add the Position and Risk Calculator to your TradingView chart, a new interactive panel appears. Here's how it works (a sketch of the underlying risk math follows these steps):
Set Price Levels: Using draggable lines on the chart, set your Entry Price, Stop Loss, and Take Profit levels.
Account Details: Go to settings and enter your Account Balance and your desired risk percentage per trade.
Automatic Calculations: As soon as the above details are set, the indicator goes to work. It first calculates your maximum risk in monetary terms and then translates that into the maximum lot size you can take for the trade.
Review and Trade: The indicator shows you all the vital statistics - CRV of the trade, the money at risk according to the calculated lot size, and the possible profit.
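The risk math described in these steps can be summarised in a short Pine Script sketch. The input names, the use of syminfo.pointvalue to turn index points into money, and the rounding rule are assumptions for illustration, not the published script's exact code.
//@version=5
indicator("Position & risk math (sketch)")
balance  = input.float(10000.0, "Account Balance")
riskPct  = input.float(1.0, "Risk per trade (%)")
entry    = input.price(4500.0, "Entry Price")
stopLoss = input.price(4480.0, "Stop Loss")
target   = input.price(4550.0, "Take Profit")
riskMoney  = balance * riskPct / 100.0                          // max money to lose on this trade
riskPerLot = math.abs(entry - stopLoss) * syminfo.pointvalue    // assumed loss per 1 lot if stopped out
lots       = math.floor(riskMoney / riskPerLot)                 // full-integer lot size
actualRisk = lots * riskPerLot                                  // money actually at risk with that lot size
potential  = lots * math.abs(target - entry) * syminfo.pointvalue
crv        = math.abs(target - entry) / math.abs(entry - stopLoss)  // reward relative to risk (the CRV)
plot(lots, "Max lot size")
plot(crv, "CRV")
plot(actualRisk, "Money at risk")
plot(potential, "Potential profit")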
Why Choose This Tool?
Informed Decisions: Your trading decisions will be based on concrete numbers, removing guesswork.
Time-saving: No need for manual calculations or using separate tools; everything is in one place.
Focus on Trading: By automating the risk management aspect, this tool allows you to focus more on your trading strategy and market analysis.
Tailor-Made for Indices: Unlike many other tools that try to serve all markets, the Position and Risk Calculator is designed specifically for indices trading.
Remember, effective risk management is what separates successful traders from those who burn out. The Position and Risk Calculator not only helps you define your risk but also helps you understand it, empowering you to trade with confidence.
So why not give yourself the best chance of success? Add the Position and Risk Calculator to your TradingView setup and experience the difference it can make.
Ticker Correlation Matrix Table and Heatmap [SS]Hello everyone,
I am in the process of releasing some of my own utility indicators/things I use to reference and perform analyses.
I do a lot of quantitative/math-based analyses, including correlation assessments that I traditionally would need to export data from TradingView and perform in SPSS, Excel or R. I have been slowly building a repertoire of Excel/R functionality right in Pine Script so I do not need to constantly export data and can perform the assessments right on TradingView.
This is an example of such an indicator.
About the Indicator:
It is a correlation table/matrix indicator. It will allow up to 10 ticker inputs, which can be stocks, economic data, anything available on Tradingview, and it will perform a correlation assessment in a matrix / heatmap style.
The indicator will show the various correlations among all of the selected ticker inputs and will colour them based on correlation strength and type.
Strong negative correlations will appear bright red.
Strong positive correlations will appear bright green.
Complete absence of correlation (i.e. 0) will show bright orange.
The rest will show a darker shade to indicate less strength/correlation.
Calculation Functions
In addition to outputting a correlation matrix, the indicator is also able to express the relationship between tickers in a linear expression using the y = mx + b formula.
If we look at the table, we can see that MSFT and AAPL have a strong correlation of 0.82.
If we want to express this relationship mathematically, we can ask the indicator to represent the linear relationship in our y = mx + b format. We simply toggle to the menu and select Convert From MSFT (Ticker 2) and Convert To AAPL (Ticker 3):
When we select this, a new table will populate below and give you the expression as well as the amount of error associated with it:
In this case, we can see that the equation is y = 0.553x + 0.626 with a range of around 10 points in either direction.
This means that, to convert MSFT to AAPL, we would multiply the MSFT price by 0.553 and then add 0.626. So if we try it, MSFT closed at 328.41. So we substitute:
AAPL price = 0.553(328.41) + 0.626
AAPL price = 181.61 + 0.626
AAPL Price = 182.24 +/- 10
AAPL actually closed at 184.12. So pretty good. If we try another, let's do SPY to XLF:
So we substitute, SPY closed at 449.16.
XLF Price = 449.16(0.077) + 0.084
XLF price = 34.59 + 0.084
XLF price = 34.67
XLF actually closed at 34.49.
This is handy if you want to see how one stock's price may affect another. If you are long on one stock and short on another, you can use this to determine the likely outcome for the other stock. However, I recommend only doing this on tickers that have a correlation of 0.7 or higher, or -0.7 or lower.
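For those curious how such a linear relationship can be fitted, the sketch below performs an ordinary least-squares fit of one ticker's closes on another's over the assessment window (slope = covariance / variance, intercept = mean difference). It is a generic illustration under assumed inputs, not the indicator's actual code.
//@version=5
indicator("Ticker-to-ticker linear fit (sketch)")
symA = input.symbol("NASDAQ:MSFT", "Convert from (Ticker A)")
symB = input.symbol("NASDAQ:AAPL", "Convert to (Ticker B)")
len  = input.int(75, "Assessment length")
a = request.security(symA, "D", close)
b = request.security(symB, "D", close)
// ordinary least squares: slope m = cov(A, B) / var(A), intercept = mean(B) - m * mean(A)
m = (ta.sma(a * b, len) - ta.sma(a, len) * ta.sma(b, len)) / ta.variance(a, len)
k = ta.sma(b, len) - m * ta.sma(a, len)
estimateB = m * a + k                      // estimated Ticker B price from Ticker A's close
plot(b, "Actual B")
plot(estimateB, "Estimated B (y = mx + b)", color.orange)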
I always had to use SPSS to do this, so being able to do it right in Pine Script is a huge convenience!
Some other uses:
As I tend to post educational content on TradingView and frequently use correlation matrices, I have formatted the indicator to be more aesthetically pleasing for these purposes. Thus, you can unselect extra ticker slots that you do not need. If I only need to display 3 tickers, I can unselect tickers 4 - 10. The end result is a cleaner table:
Essential Functions:
The assessment length is defaulted to 75 candles on the daily timeframe. Be sure to have the daily timeframe opened when you are viewing the indicator.
You can increase or decrease the assessment length as you desire.
You can also specify the source. The source is defaulted to close, but if you want to see the direct correlation of ticker's highs and/or lows, you can modify the source input in the settings menu to look at this.
Just remember to have the chart opened to whatever timeframe you are looking at.
And that's the indicator! Hopefully you find it helpful. It's more of an academic indicator, but it performs a function that I personally use frequently in analyses, so I hope you benefit from it as well!
Thanks for checking it out! Safe trades everyone!
Bollinger Bands Heatmap (BBH)The Bollinger Bands Heatmap (BBH) Indicator provides a unique visualization of Bollinger Bands by displaying the full distribution of prices as a heatmap overlaying your price chart. Unlike traditional Bollinger Bands, which plot the mean and standard deviation as lines, BBH illustrates the entire statistical distribution of prices based on a normal distribution model.
This heatmap indicator offers traders a visually appealing way to understand the probabilities associated with different price levels. The lower the weight of a certain level, the more transparent it appears on the heatmap, making it easier to identify key areas of interest at a glance.
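Conceptually, the weighting can be pictured as a normal-density value around the SMA mean that is mapped to transparency. The sketch below illustrates that idea under assumed input names; it is not the published implementation.
//@version=5
indicator("Normal-density weight (sketch)", overlay = true)
len  = input.int(20, "Length (SMA)")
mult = input.float(2.0, "Multiplier")
mean = ta.sma(close, len)
sd   = ta.stdev(close, len)
// density weight of an arbitrary price level, normalised so the mean maps to 1
weightOf(float level) =>
    z = (level - mean) / sd
    math.exp(-0.5 * z * z)
upper  = mean + mult * sd
w      = weightOf(upper)                          // about 0.14 for a 2-sigma level
transp = 100 - int(math.round(100 * w))           // low weight -> high transparency
plot(upper, "Upper band", color.new(color.teal, transp))
plot(mean,  "Mean",       color.new(color.teal, 0))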
Key Features
Dynamic Heatmap: Changes in real-time as new price data comes in.
Fully Customizable: Adjust the scale, offset, alpha, and other parameters to suit your trading style.
Visually Engaging: Uses gradients of colors to distinguish between high and low probabilities.
Settings
Scale
Tooltip: Scale the size of the heatmap.
Purpose: The 'Scale' setting allows you to adjust the dimensions of each heatmap box. A higher value will result in larger boxes and a more generalized view, while a lower value will make the boxes smaller, offering a more detailed look at price distributions.
Values: You can set this from a minimum of 0.125, stepping up by increments of 0.125.
Scale ATR Length
Tooltip: The ATR used to scale the heatmap boxes.
Purpose: This setting is designed to adapt the heatmap to the instrument's volatility. It determines the length of the Average True Range (ATR) used to size the heatmap boxes.
Values: Minimum allowable value is 5. You can increase this to capture more bars in the ATR calculation for greater smoothing.
Offset
Tooltip: Offset mean by ATR.
Purpose: The 'Offset' setting allows you to shift the mean value by a specified ATR. This could be useful for strategies that aim to capitalize on extreme price movements.
Values: The value can be any floating-point number. Positive values shift the mean upward, while negative values shift it downward.
Multiplier
Tooltip: Bollinger Bands Multiplier.
Purpose: The 'Multiplier' setting determines how wide the Bollinger Bands are around the mean. A higher value will result in a wider heatmap, capturing more extreme price movements. A lower value will tighten the heatmap around the mean price.
Values: The minimum is 0, and you can increase this in steps of 0.2.
Length
Tooltip: Length of Simple Moving Average (SMA).
Purpose: This setting specifies the period for the Simple Moving Average that serves as the basis for the Bollinger Bands. A higher value will produce a smoother average, while a lower value will make it more responsive to price changes.
Values: Can be set to any integer value.
Heat Map Alpha
Tooltip: Opacity level of the heatmap.
Purpose: This controls the transparency of the heatmap. A lower value will make the heatmap more transparent, allowing you to see the price action more clearly. A higher value will make the heatmap more opaque, emphasizing the bands.
Values: Ranges from 0 (completely transparent) to 100 (completely opaque).
Color Settings
High Color & Low Color: These settings allow you to customize the gradient colors of the heatmap.
Purpose: Use contrasting colors for better visibility or colors that you prefer. The 'High Color' is used for areas with high density (high probability), while the 'Low Color' is for low-density areas (low probability).
Usage Scenarios for Settings
For Volatile Markets: Increase 'Scale ATR Length' for better smoothing and set a higher 'Multiplier' to capture wider price movements.
For Trend Following: You might want to set a larger 'Length' for the SMA and adjust 'Scale' and 'Offset' to focus on more probable price zones.
These are just recommendations; feel free to experiment with these settings to suit your specific trading requirements.
How To Interpret
The heatmap gives a visual representation of the range within which prices are likely to move. Areas with high density (brighter color) indicate a higher probability of the price being in that range, whereas areas with low density (more transparent) indicate a lower probability.
Bright Areas: Considered high-probability zones where the price is more likely to be.
Transparent Areas: Considered low-probability zones where the price is less likely to be.
Tips For Use
Trend Confirmation: Use the heatmap along with other trend indicators to confirm the strength and direction of a trend.
Volatility: Use the density and spread of the heatmap as an indication of market volatility.
Entry and Exit: High-density areas could be potential support and resistance levels, aiding in entry and exit decisions.
Caution
The Bollinger Bands Heatmap assumes a normal distribution of prices. While this is a standard assumption in statistics, it is crucial to understand that real-world price movements may not always adhere to a normal distribution.
Conclusion
The Bollinger Bands Heatmap Indicator offers traders a fresh perspective on Bollinger Bands by transforming them into a visual, real-time heatmap. With its customizable settings and visually engaging display, BBH can be a useful tool for traders looking to understand price probabilities in a dynamic way.
Feel free to explore its features and adjust the settings to suit your trading strategy. Happy trading!
Pairs Trade DataThis indicator is helpful when doing pairs trades: when a pair is charted in the style StockA/StockB, it displays a table with data on the two stocks, mainly the price of each and the individual tickers with their exchanges.
SimilarityMeasuresLibrary "SimilarityMeasures"
Similarity measures are statistical methods used to quantify the distance between different data sets
or strings. There are various types of similarity measures, including those that compare:
- data points (SSD, Euclidean, Manhattan, Minkowski, Chebyshev, Correlation, Cosine, Camberra, MAE, MSE, Lorentzian, Intersection, Penrose Shape, Meehl),
- strings (Edit(Levenshtein), Lee, Hamming, Jaro),
- probability distributions (Mahalanobis, Fidelity, Bhattacharyya, Hellinger),
- sets (Kumar Hassebrook, Jaccard, Sorensen, Chi Square).
---
These measures are used in various fields such as data analysis, machine learning, and pattern recognition. They
help to compare and analyze similarities and differences between different data sets or strings, which
can be useful for making predictions, classifications, and decisions.
---
References:
en.wikipedia.org
cran.r-project.org
numerics.mathdotnet.com
github.com
github.com
github.com
Encyclopedia of Distances, doi.org
ssd(p, q)
Sum of squared difference for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of distance that calculates the squared euclidean distance.
euclidean(p, q)
Euclidean distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of distance that calculates the straight-line (or Euclidean) distance.
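As a plain illustration of the formula (not the library's source), the Euclidean distance between two same-length float arrays is the square root of the sum of squared element-wise differences:
//@version=5
indicator("Euclidean distance (illustration)")
// assumes p and q have the same, non-zero length
euclid(array<float> p, array<float> q) =>
    acc = 0.0
    for i = 0 to array.size(p) - 1
        d = array.get(p, i) - array.get(q, i)
        acc += d * d
    math.sqrt(acc)
p = array.from(1.0, 2.0, 3.0)
q = array.from(2.0, 4.0, 6.0)
plot(euclid(p, q), "distance")          // sqrt(1 + 4 + 9) ≈ 3.742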
manhattan(p, q)
Manhattan distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of absolute differences between both points.
minkowski(p, q, p_value)
Minkowski distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
p_value (float) : `float` P value, default=1.0 (1: manhattan, 2: euclidean); does not support chebyshev.
Returns: Measure of similarity in the normed vector space.
chebyshev(p, q)
Chebyshev distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of maximum absolute difference.
correlation(p, q)
Correlation distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Measure of distance based on the correlation between both distributions.
cosine(p, q)
Cosine distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Cosine distance between vectors `p` and `q`.
---
angiogenesis.dkfz.de
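For reference, the cosine distance is commonly defined as one minus the cosine similarity of the two vectors. The sketch below illustrates that convention and assumes equal-length, non-empty inputs; the library's exact convention may differ.
//@version=5
indicator("Cosine distance (illustration)")
cosineDist(array<float> p, array<float> q) =>
    dot = 0.0
    np  = 0.0
    nq  = 0.0
    for i = 0 to array.size(p) - 1
        dot += array.get(p, i) * array.get(q, i)
        np  += math.pow(array.get(p, i), 2)
        nq  += math.pow(array.get(q, i), 2)
    1.0 - dot / (math.sqrt(np) * math.sqrt(nq))
plot(cosineDist(array.from(1.0, 2.0, 3.0), array.from(2.0, 4.0, 6.0)), "distance")  // 0: same direction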
camberra(p, q)
Camberra distance for N dimensions.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Weighted measure of absolute differences between both points.
mae(p, q)
Mean absolute error is a normalized version of the sum of absolute difference (manhattan).
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Mean absolute error of vectors `p` and `q`.
mse(p, q)
Mean squared error is a normalized version of the sum of squared difference.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Mean squared error of vectors `p` and `q`.
lorentzian(p, q)
Lorentzian distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Lorentzian distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
intersection(p, q)
Intersection distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Intersection distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
penrose(p, q)
Penrose Shape distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Penrose shape distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
meehl(p, q)
Meehl distance between provided vectors.
Parameters:
p (float ) : `array` Vector with first numeric distribution.
q (float ) : `array` Vector with second numeric distribution.
Returns: Meehl distance of vectors `p` and `q`.
---
angiogenesis.dkfz.de
edit(x, y)
Edit (aka Levenshtein) distance for indexed strings.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
Returns: Number of deletions, insertions, or substitutions required to transform source string into target string.
---
generated description:
The Edit distance is a measure of similarity used to compare two strings. It is defined as the minimum number of
operations (insertions, deletions, or substitutions) required to transform one string into another. The operations
are performed on the characters of the strings, and the cost of each operation depends on the specific algorithm
used.
The Edit distance is widely used in various applications such as spell checking, text similarity, and machine
translation. It can also be used for other purposes like finding the closest match between two strings or
identifying the common prefixes or suffixes between them.
---
github.com
www.red-gate.com
planetcalc.com
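As an illustration of the recurrence behind this function (not the library's source), the classic two-row dynamic-programming formulation looks like this in Pine Script, assuming non-empty indexed strings:
//@version=5
indicator("Edit distance (illustration)")
// prev/curr are the two rows of the classic DP table
editDistance(array<int> x, array<int> y) =>
    n = array.size(x)
    m = array.size(y)
    prev = array.new_int(m + 1, 0)
    curr = array.new_int(m + 1, 0)
    for j = 0 to m
        array.set(prev, j, j)                        // distance from the empty prefix of x
    for i = 1 to n
        array.set(curr, 0, i)
        for j = 1 to m
            subCost = array.get(x, i - 1) == array.get(y, j - 1) ? 0 : 1
            del = array.get(prev, j) + 1
            ins = array.get(curr, j - 1) + 1
            sub = array.get(prev, j - 1) + subCost
            array.set(curr, j, math.min(del, math.min(ins, sub)))
        prev := array.copy(curr)
    array.get(prev, m)
// "kitten" -> "sitting" encoded as character codes: distance is 3
x = array.from(107, 105, 116, 116, 101, 110)
y = array.from(115, 105, 116, 116, 105, 110, 103)
plot(editDistance(x, y), "edit distance")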
lee(x, y, dsize)
Distance between two indexed strings of equal length.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
dsize (int) : `int` Dictionary size.
Returns: Distance between two strings by accounting for dictionary size.
---
www.johndcook.com
hamming(x, y)
Distance between two indexed strings of equal length.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
Returns: Number of positions at which the two sequences differ.
---
en.wikipedia.org
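Illustration of the definition (assuming equal-length, non-empty inputs): the Hamming distance simply counts the positions at which the two sequences differ.
//@version=5
indicator("Hamming distance (illustration)")
hammingDist(array<int> x, array<int> y) =>
    diff = 0
    for i = 0 to array.size(x) - 1
        diff += (array.get(x, i) != array.get(y, i) ? 1 : 0)
    diff
plot(hammingDist(array.from(1, 0, 1, 1), array.from(1, 1, 0, 1)), "distance")  // differs at two positions -> 2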
jaro(x, y)
Distance between two indexed strings.
Parameters:
x (int ) : `array` Indexed array.
y (int ) : `array` Indexed array.
Returns: Measure of two strings' similarity: the higher the value, the more similar the strings are.
The score is normalized such that `0` equates to no similarities and `1` is an exact match.
---
rosettacode.org
mahalanobis(p, q, VI)
Mahalanobis distance between two vectors with population inverse covariance matrix.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
VI (matrix) : `matrix` Inverse of the covariance matrix.
Returns: The mahalanobis distance between vectors `p` and `q`.
---
people.revoledu.com
stat.ethz.ch
docs.scipy.org
fidelity(p, q)
Fidelity distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Bhattacharyya Coefficient between vectors `p` and `q`.
---
en.wikipedia.org
bhattacharyya(p, q)
Bhattacharyya distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Bhattacharyya distance between vectors `p` and `q`.
---
en.wikipedia.org
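For reference, the standard definitions are BC = sum(sqrt(p_i * q_i)) for the coefficient (what fidelity() above returns) and -ln(BC) for the distance. The sketch below illustrates those formulas, not the library's exact source, and assumes same-length discrete probability distributions.
//@version=5
indicator("Bhattacharyya distance (illustration)")
bcoeff(array<float> p, array<float> q) =>
    acc = 0.0
    for i = 0 to array.size(p) - 1
        acc += math.sqrt(array.get(p, i) * array.get(q, i))
    acc
p = array.from(0.25, 0.25, 0.25, 0.25)
q = array.from(0.50, 0.25, 0.15, 0.10)
plot(-math.log(bcoeff(p, q)), "Bhattacharyya distance")   // 0 only when p == q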
hellinger(p, q)
Hellinger distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The hellinger distance between vectors `p` and `q`.
---
en.wikipedia.org
jamesmccaffrey.wordpress.com
kumar_hassebrook(p, q)
Kumar Hassebrook distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Kumar Hassebrook distance between vectors `p` and `q`.
---
github.com
jaccard(p, q)
Jaccard distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Jaccard distance between vectors `p` and `q`.
---
github.com
sorensen(p, q)
Sorensen distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
Returns: The Sorensen distance between vectors `p` and `q`.
---
people.revoledu.com
chi_square(p, q, eps)
Chi Square distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
eps (float)
Returns: The Chi Square distance between vectors `p` and `q`.
---
uw.pressbooks.pub
stats.stackexchange.com
www.itl.nist.gov
kulczynsky(p, q, eps)
Kulczynsky distance between provided vectors.
Parameters:
p (float ) : `array` 1D Vector.
q (float ) : `array` 1D Vector.
eps (float)
Returns: The Kulczynsky distance between vectors `p` and `q`.
---
github.com