Liquidation_lines
Library "Liquidationline"
f_calculateLeverage(_leverage, _maintainance, _value, _direction)
Parameters:
_leverage
_maintainance
_value
_direction
f_liqline_update(_Liqui_Line, _killonlowhigh)
Parameters:
_Liqui_Line
_killonlowhigh
f_liqline_draw(_Liqui_Line, _priceorliq)
Parameters:
_Liqui_Line
_priceorliq
f_liqline_add(_Liqui_Line, linetoadd, _limit)
Parameters:
_Liqui_Line
linetoadd
_limit
Liquidationline
Fields:
creationtime
stoptime
price
leverage
maintainance
line_active
line_color
line_thickness
line_style
line_direction
line_finished
text_active
text_size
text_color
This library can draw typical liquidation lines, which can be triggered e.g. by indicator signals.
You can see the default implementation in the lower part of the code, starting with RUNTIME.
Don't forget to increase max lines to 500 in your script.
It can look like this screenshot here, with only minor changes to your executing script.
The base is the same.
Global Liquidity
Plots the sum of the balance sheets of the world's major central banks - FED, ECB, BoE, PBoC, BOJ, India and Switzerland - in a currency of your choice. Defaults to USD.
Also shows FED net liquidity (balance sheet - TGA - RRP) for comparison. Uses a configurable multiplier to make the two lines viewable on the same price scale.
Distributions
Library "Distributions"
Library with price distribution zones calculation helpers.
Based on research from "Trading Systems and Methods, 5th Edition" by Perry J. Kaufman
getZones(h, l, c, window)
Returns price distribution zones based on HLC and for some period
Parameters:
h : high price
l : low price
c : close price
window : period to calculate distributions
Returns: tuple of 5 price zones in descending order, from highest to lowest
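For reference, a minimal usage sketch (the import path is hypothetical; substitute the library author's handle and version):
//@version=5
indicator("Distribution zones sketch", overlay=true)
import publisher/Distributions/1 as dist // hypothetical import path
[z1, z2, z3, z4, z5] = dist.getZones(high, low, close, 20)
plot(z1, "Highest zone", color.red)
plot(z3, "Middle zone", color.gray)
plot(z5, "Lowest zone", color.green)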
Balance of Force Day of the Week (BOFDW)
The script is a custom technical indicator for TradingView that is based on an analysis of the price movements of a financial instrument over the course of a week. The indicator uses a variety of inputs, including the open and close prices for each day of the week, to determine the Balance of Force (BOF) for each day.
The BOF is calculated based on the relative magnitude of bullish and bearish price movements and is then used to determine the average BOF over a moving window of data points. This average BOF is displayed on the chart as an overlay, providing a measure of the average bullishness or bearishness of the financial instrument over the course of a week.
The indicator also allows users to specify the location of the overlay on the chart and to customize the appearance of the overlay with options for text and box colors. The script provides a number of built-in options for chart position, including the top-left, top-middle, top-right, middle-left, middle-center, middle-right, bottom-left, bottom-middle, and bottom-right corners of the chart.
Overall, this custom technical indicator is a useful tool for traders and investors who are looking to gain a deeper understanding of the price trends of a financial instrument over the course of a week. By providing a clear and concise measure of the average BOF over time, the indicator can help users identify key patterns in the market and make more informed trading decisions.
PerformanceTable
Library "PerformanceTable"
This library was created as a library because adding a performance table to an existing strategy script made the strategy script lengthy and inconvenient to manage.
The monthly table script referenced @QuantNomad's code.
The performance table script referenced @myncrypto's code.
To use, copy and paste the code below at the bottom of the strategy script you are using, and the table for strategy performance will be displayed on a chart.
//------------Copy & Paste --------------------------------------//
import Cube_Lee/PerformanceTable/1 as PT
// inputs renamed so they don't collide with the import alias PT
showPT = input.bool(true, "Show Performance Table", tooltip = "Displays the strategy's performance as a table in the top-right corner.", group = "Performance Table")
showMT = input.bool(true, "Show Monthly Table", tooltip = "Displays the strategy's monthly returns as a table in the bottom-right corner.", group = "Performance Table")
if showPT
    PT.PerformanceTable()
if showMT
    PT.MonthlyTable()
//------------Copy & Paste---------------------------------------//
PerformanceTable()
MonthlyTable()
VIX Oscillator
This is my VIX Oscillator indicator.
About it:
This indicator takes the Z-Score of the VIX and of the current ticker you are on and presents them in the format of an oscillator.
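A minimal sketch of that core idea, assuming a standard z-score over a lookback window (the length and symbol handling are illustrative, not the script's exact internals):
//@version=5
indicator("VIX Oscillator sketch")
len = input.int(100, "Z-Score length") // illustrative default
// z-score: distance from the mean in units of standard deviation
zscore(src, l) => (src - ta.sma(src, l)) / ta.stdev(src, l)
vix = request.security("CBOE:VIX", timeframe.period, close)
plot(zscore(close, len), "Ticker Z-Score", color.purple)
plot(zscore(vix, len), "VIX Z-Score", color.blue)
hline(2.5)
hline(0)
hline(-2.5)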
Key parts of the indicator:
A diagram of the key elements of the indicator is displayed above.
Purple Line: Represents the Z-Score of the current Ticker.
Blue Line: Represents the Z-Score of the VIX
Green fill line: Represents bullish divergence
Red fill line: Represents bearish divergence
How to use it:
Characteristics for long entries:
- Look for recent bullish divergence (green fill line)
- Look for the ticker line (purple line) to be holding above 0 (neutrality)
- look for a bullish cross (purple line (ticker) crossing over blue line (VIX))
Characteristics for short entries:
- Look for recent Bearish divergence
- Look for the VIX line (blue line) to be holding above 0 and above the ticker line
- Look for the ticker line to be holding below 0
- Look for a bearish cross (blue crossing above purple)
Some principles:
The bands represent oversold, overbought and neutral.
0 is absolute neutrality. No bias here.
Anything towards +2.5 is considered normal, moving towards overbought (2.5 or higher).
Anything towards -2.5 is considered normal, moving towards oversold (-2.5 or lower).
+2.5 or higher is overbought.
-2.5 or lower is oversold.
As always, I have prepared a quick tutorial video on this indicator for your reference:
Please let me know your questions, comments or suggestions about this indicator below.
Thank you for checking it out!
Delta Ladder [Kioseff Trading]
Hello!
This script presents volume delta data in various forms!
Features
Classic mode: Volume delta boxes oriented to the right of the bar (sell closer / buy further)
On Bar mode: Volume delta boxes oriented on the bar (sell left / buy right)
Pure Ladder mode: Pure volume delta ladder
PoC highlighting
Color-coordinated delta boxes. Marginal volume differences are substantially shaded while large volume differences are lightly shaded.
Volume delta boxes can be merged and delta values removed to generate a color-only canvas reflecting vol. delta differences in price blocks.
Price bars can be split up to 497 times - allowing for greater precision.
Total volume delta for the bar and timestamp included
The image above shows Classic mode - delta blocks are oriented left/right contingent on positive/negative values!
The image above shows the same price sequence; however, delta blocks are superimposed on the price bar. Left-side blocks reflect negative delta while right-side blocks reflect positive delta! To apply this display method - select "On Bar" for the "Data Display Method" setting!
The image above shows "Pure Ladder" mode. Delta blocks remain color-coordinated; however, all delta blocks retain the same x-axis as the price bar they were calculated for!
Additionally, you can select to remove the delta values and merge the delta boxes to generate a color-based canvas indicative of volume delta at traded price levels!
The image above shows the same price sequence; however, the "Volume Assumption" setting is activated.
When active, the indicator assumes a 60/40 split when a level is traded and only one metric, "buy volume" or "sell volume", is recorded. This means there shouldn't be any levels recorded where "buy volume" is greater than 0 and "sell volume" equals 0, or vice versa. While this assumption is arbitrary, it may help better replicate the volume delta and OI delta calculations seen on other charting platforms.
This option is configurable; you can have the script record volume "as is" at the corresponding price level instead of assuming the 60/40 split!
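A minimal sketch of how such a split could be applied (the names are illustrative, not the script's internals):
// apply a 60/40 split when only one side of a level was recorded (illustrative)
f_applySplit(float buyVol, float sellVol) =>
    float b = buyVol
    float s = sellVol
    if b > 0 and s == 0
        s := b * 0.4
        b := b * 0.6
    else if s > 0 and b == 0
        b := s * 0.4
        s := s * 0.6
    [b, s]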
I plan to roll out additional features for the indicator - particularly tick-based price blocks! Stay tuned (:
Thank you!
Reinforced RSI - The Quant Science
This strategy was designed and written with the goal of showing and motivating the community to integrate our 'Probabilities' module with their own scripts.
We have recreated one of the simplest strategies used by many traders. The strategy only trades long and uses the overbought and oversold levels on the RSI indicator.
We added stop losses and take profits to offer more dynamism to the strategy. Then the 'Probabilities' module was integrated to create a probabilistic reinforcement on each trade.
Specifically, each trade is executed only if the past probability of making a profitable trade is greater than or equal to 51%. This greatly increased the performance of the strategy by avoiding potentially bad trades.
The backtesting was calculated on NASDAQ:TSLA, on the 15-minute timeframe.
The strategy works on Tesla using the following parameters:
1. Length: 13
2. Oversold: 40
3. Overbought: 70
4. Lookback: 50
5. Take profit: 3%
6. Stop loss: 3%
Time period: January 2021 to date.
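For reference, a minimal sketch of the base long-only RSI logic with those parameters, without the 'Probabilities' reinforcement (illustrative, not the published code):
//@version=5
strategy("Reinforced RSI base sketch", overlay=true)
r = ta.rsi(close, 13)
// enter long when RSI crosses up through the oversold level
if ta.crossover(r, 40)
    strategy.entry("Long", strategy.long)
// 3% take profit and 3% stop loss around the average entry price
if strategy.position_size > 0
    strategy.exit("Exit", "Long", limit = strategy.position_avg_price * 1.03, stop = strategy.position_avg_price * 0.97)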
Our Probabilities Module, used in the strategy example:
Machine Learning: Lorentzian Classification
█ OVERVIEW
A Lorentzian Distance Classifier (LDC) is a Machine Learning classification algorithm capable of categorizing historical data from a multi-dimensional feature space. This indicator demonstrates how Lorentzian Classification can also be used to predict the direction of future price movements when used as the distance metric for a novel implementation of an Approximate Nearest Neighbors (ANN) algorithm.
█ BACKGROUND
In physics, Lorentzian space is perhaps best known for its role in describing the curvature of space-time in Einstein's theory of General Relativity (2). Interestingly, however, this abstract concept from theoretical physics also has tangible real-world applications in trading.
Recently, it was hypothesized that Lorentzian space was also well-suited for analyzing time-series data (4), (5). This hypothesis has been supported by several empirical studies that demonstrate that Lorentzian distance is more robust to outliers and noise than the more commonly used Euclidean distance (1), (3), (6). Furthermore, Lorentzian distance was also shown to outperform dozens of other highly regarded distance metrics, including Manhattan distance, Bhattacharyya similarity, and Cosine similarity (1), (3). Outside of Dynamic Time Warping based approaches, which are unfortunately too computationally intensive for PineScript at this time, the Lorentzian Distance metric consistently scores the highest mean accuracy over a wide variety of time series data sets (1).
Euclidean distance is commonly used as the default distance metric for NN-based search algorithms, but it may not always be the best choice when dealing with financial market data. This is because financial market data can be significantly impacted by proximity to major world events such as FOMC Meetings and Black Swan events. This event-based distortion of market data can be framed as similar to the gravitational warping caused by a massive object on the space-time continuum. For financial markets, the analogous continuum that experiences warping can be referred to as "price-time".
Below is a side-by-side comparison of how neighborhoods of similar historical points appear in three-dimensional Euclidean Space and Lorentzian Space:
This figure demonstrates how Lorentzian space can better accommodate the warping of price-time since the Lorentzian distance function compresses the Euclidean neighborhood in such a way that the new neighborhood distribution in Lorentzian space tends to cluster around each of the major feature axes in addition to the origin itself. This means that, even though some nearest neighbors will be the same regardless of the distance metric used, Lorentzian space will also allow for the consideration of historical points that would otherwise never be considered with a Euclidean distance metric.
Intuitively, the advantage inherent in the Lorentzian distance metric makes sense. For example, it is logical that the price action that occurs in the hours after Chairman Powell finishes delivering a speech would resemble at least some of the previous times when he finished delivering a speech. This may be true regardless of other factors, such as whether or not the market was overbought or oversold at the time or if the macro conditions were more bullish or bearish overall. These historical reference points are extremely valuable for predictive models, yet the Euclidean distance metric would miss these neighbors entirely, often in favor of irrelevant data points from the day before the event. By using Lorentzian distance as a metric, the ML model is instead able to consider the warping of price-time caused by the event and, ultimately, transcend the temporal bias imposed on it by the time series.
For more information on the implementation details of the Approximate Nearest Neighbors (ANN) algorithm used in this indicator, please refer to the detailed comments in the source code.
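As a small illustration of the metric itself, a Lorentzian distance over a feature vector sums log(1 + |difference|) across features, in contrast to the squared differences of a Euclidean metric. A minimal sketch (not the indicator's full ANN implementation):
// Lorentzian distance between two feature vectors (illustrative)
f_lorentzianDistance(array<float> a, array<float> b) =>
    float d = 0.0
    for i = 0 to array.size(a) - 1
        d += math.log(1 + math.abs(array.get(a, i) - array.get(b, i)))
    d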
█ HOW TO USE
Below is an explanatory breakdown of the different parts of this indicator as it appears in the interface:
Below is an explanation of the different settings for this indicator:
General Settings:
Source - This has a default value of "hlc3" and is used to control the input data source.
Neighbors Count - This has a default value of 8, a minimum value of 1, a maximum value of 100, and a step of 1. It is used to control the number of neighbors to consider.
Max Bars Back - This has a default value of 2000.
Feature Count - This has a default value of 5, a minimum value of 2, and a maximum value of 5. It controls the number of features to use for ML predictions.
Color Compression - This has a default value of 1, a minimum value of 1, and a maximum value of 10. It is used to control the compression factor for adjusting the intensity of the color scale.
Show Exits - This has a default value of false. It controls whether to show the exit threshold on the chart.
Use Dynamic Exits - This has a default value of false. It is used to control whether to attempt to let profits ride by dynamically adjusting the exit threshold based on kernel regression.
Feature Engineering Settings:
Note: The Feature Engineering section is for fine-tuning the features used for ML predictions. The default values are optimized for the 4H to 12H timeframes for most charts, but they should also work reasonably well for other timeframes. By default, the model can support features that accept two parameters (Parameter A and Parameter B, respectively). Even though there are only 4 features provided by default, the same feature with different settings counts as two separate features. If the feature only accepts one parameter, then the second parameter will default to EMA-based smoothing with a default value of 1. These features represent the most effective combination I have encountered in my testing, but additional features may be added as additional options in the future.
Feature 1 - This has a default value of "RSI" and options are: "RSI", "WT", "CCI", "ADX".
Feature 2 - This has a default value of "WT" and options are: "RSI", "WT", "CCI", "ADX".
Feature 3 - This has a default value of "CCI" and options are: "RSI", "WT", "CCI", "ADX".
Feature 4 - This has a default value of "ADX" and options are: "RSI", "WT", "CCI", "ADX".
Feature 5 - This has a default value of "RSI" and options are: "RSI", "WT", "CCI", "ADX".
Filters Settings:
Use Volatility Filter - This has a default value of true. It is used to control whether to use the volatility filter.
Use Regime Filter - This has a default value of true. It is used to control whether to use the trend detection filter.
Use ADX Filter - This has a default value of false. It is used to control whether to use the ADX filter.
Regime Threshold - This has a default value of -0.1, a minimum value of -10, a maximum value of 10, and a step of 0.1. It is used to control the Regime Detection filter for detecting Trending/Ranging markets.
ADX Threshold - This has a default value of 20, a minimum value of 0, a maximum value of 100, and a step of 1. It is used to control the threshold for detecting Trending/Ranging markets.
Kernel Regression Settings:
Trade with Kernel - This has a default value of true. It is used to control whether to trade with the kernel.
Show Kernel Estimate - This has a default value of true. It is used to control whether to show the kernel estimate.
Lookback Window - This has a default value of 8 and a minimum value of 3. It is used to control the number of bars used for the estimation. Recommended range: 3-50
Relative Weighting - This has a default value of 8 and a step size of 0.25. It is used to control the relative weighting of time frames. Recommended range: 0.25-25
Start Regression at Bar - This has a default value of 25. It is used to control the bar index on which to start regression. Recommended range: 0-25
Display Settings:
Show Bar Colors - This has a default value of true. It is used to control whether to show the bar colors.
Show Bar Prediction Values - This has a default value of true. It controls whether to show the ML model's evaluation of each bar as an integer.
Use ATR Offset - This has a default value of false. It controls whether to use the ATR offset instead of the bar prediction offset.
Bar Prediction Offset - This has a default value of 0 and a minimum value of 0. It is used to control the offset of the bar predictions as a percentage from the bar high or close.
Backtesting Settings:
Show Backtest Results - This has a default value of true. It is used to control whether to display the win rate of the given configuration.
█ WORKS CITED
(1) R. Giusti and G. E. A. P. A. Batista, "An Empirical Comparison of Dissimilarity Measures for Time Series Classification," 2013 Brazilian Conference on Intelligent Systems, Oct. 2013, DOI: 10.1109/bracis.2013.22.
(2) Y. Kerimbekov, H. Ş. Bilge, and H. H. Uğurlu, "The use of Lorentzian distance metric in classification problems," Pattern Recognition Letters, vol. 84, 170–176, Dec. 2016, DOI: 10.1016/j.patrec.2016.09.006.
(3) A. Bagnall, A. Bostrom, J. Large, and J. Lines, "The Great Time Series Classification Bake Off: An Experimental Evaluation of Recently Proposed Algorithms." ResearchGate, Feb. 04, 2016.
(4) H. Ş. Bilge, Yerzhan Kerimbekov, and Hasan Hüseyin Uğurlu, "A new classification method by using Lorentzian distance metric," ResearchGate, Sep. 02, 2015.
(5) Y. Kerimbekov and H. Şakir Bilge, "Lorentzian Distance Classifier for Multiple Features," Proceedings of the 6th International Conference on Pattern Recognition Applications and Methods, 2017, DOI: 10.5220/0006197004930501.
(6) V. Surya Prasath et al., "Effects of Distance Measure Choice on KNN Classifier Performance - A Review."
█ ACKNOWLEDGEMENTS
@veryfid - For many invaluable insights, discussions, and advice that helped to shape this project.
@capissimo - For open sourcing his interesting ideas regarding various KNN implementations in PineScript, several of which helped inspire my original undertaking of this project.
@RikkiTavi - For many invaluable physics-related conversations and for his helping me develop a mechanism for visualizing various distance algorithms in 3D using JavaScript
@jlaurel - For invaluable literature recommendations that helped me to understand the underlying subject matter of this project.
@annutara - For help in beta-testing this indicator and for sharing many helpful ideas and insights early on in its development.
@jasontaylor7 - For helping to beta-test this indicator and for many helpful conversations that helped to shape my backtesting workflow
@meddymarkusvanhala - For helping to beta-test this indicator
@dlbnext - For incredibly detailed backtesting of this indicator and for sharing numerous ideas on how the user experience could be improved.
MLExtensions
Library "MLExtensions"
normalizeDeriv(src, quadraticMeanLength)
Returns the normalized derivative of the input series.
Parameters:
src : The input series (i.e., the first-order derivative for price).
quadraticMeanLength : The length of the quadratic mean (RMS).
Returns: nDeriv The normalized derivative of the input series.
normalize(src, min, max)
Rescales a source value with an unbounded range to a target range.
Parameters:
src : The input series
min : The minimum value of the unbounded range
max : The maximum value of the unbounded range
Returns: The normalized series
rescale(src, oldMin, oldMax, newMin, newMax)
Rescales a source value with a bounded range to anther bounded range
Parameters:
src : The input series
oldMin : The minimum value of the range to rescale from
oldMax : The maximum value of the range to rescale from
newMin : The minimum value of the range to rescale to
newMax : The maximum value of the range to rescale to
Returns: The rescaled series
color_green(prediction)
Assigns varying shades of the color green based on the KNN classification
Parameters:
prediction : Value (int|float) of the prediction
Returns: color
color_red(prediction)
Assigns varying shades of the color red based on the KNN classification
Parameters:
prediction : Value of the prediction
Returns: color
tanh(src)
Returns the hyperbolic tangent of the input series. The sigmoid-like hyperbolic tangent function is used to compress the input to a value between -1 and 1.
Parameters:
src : The input series (i.e., the normalized derivative).
Returns: tanh The hyperbolic tangent of the input series.
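Since Pine has no built-in hyperbolic tangent, such a function can be sketched from the exponential identity tanh(x) = (e^(2x) - 1) / (e^(2x) + 1):
// tanh via its exponential identity (illustrative)
f_tanh(float x) =>
    e2x = math.exp(2 * x)
    (e2x - 1) / (e2x + 1)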
dualPoleFilter(src, lookback)
Returns the smoothed hyperbolic tangent of the input series.
Parameters:
src : The input series (i.e., the hyperbolic tangent).
lookback : The lookback window for the smoothing.
Returns: filter The smoothed hyperbolic tangent of the input series.
tanhTransform(src, smoothingFrequency, quadraticMeanLength)
Returns the tanh transform of the input series.
Parameters:
src : The input series (i.e., the result of the tanh calculation).
smoothingFrequency
quadraticMeanLength
Returns: signal The smoothed hyperbolic tangent transform of the input series.
n_rsi(src, n1, n2)
Returns the normalized RSI ideal for use in ML algorithms.
Parameters:
src : The input series (i.e., the result of the RSI calculation).
n1 : The length of the RSI.
n2 : The smoothing length of the RSI.
Returns: signal The normalized RSI.
n_cci(src, n1, n2)
Returns the normalized CCI ideal for use in ML algorithms.
Parameters:
src : The input series (i.e., the result of the CCI calculation).
n1 : The length of the CCI.
n2 : The smoothing length of the CCI.
Returns: signal The normalized CCI.
n_wt(src, n1, n2)
Returns the normalized WaveTrend Classic series ideal for use in ML algorithms.
Parameters:
src : The input series (i.e., the result of the WaveTrend Classic calculation).
n1
n2
Returns: signal The normalized WaveTrend Classic series.
n_adx(highSrc, lowSrc, closeSrc, n1)
Returns the normalized ADX ideal for use in ML algorithms.
Parameters:
highSrc : The input series for the high price.
lowSrc : The input series for the low price.
closeSrc : The input series for the close price.
n1 : The length of the ADX.
regime_filter(src, threshold, useRegimeFilter)
Parameters:
src
threshold
useRegimeFilter
filter_adx(src, length, adxThreshold, useAdxFilter)
filter_adx
Parameters:
src : The source series.
length : The length of the ADX.
adxThreshold : The ADX threshold.
useAdxFilter : Whether to use the ADX filter.
Returns: The ADX.
filter_volatility(minLength, maxLength, useVolatilityFilter)
filter_volatility
Parameters:
minLength : The minimum length of the ATR.
maxLength : The maximum length of the ATR.
useVolatilityFilter : Whether to use the volatility filter.
Returns: Boolean indicating whether or not to let the signal pass through the filter.
backtest(high, low, open, startLongTrade, endLongTrade, startShortTrade, endShortTrade, isStopLossHit, maxBarsBackIndex, thisBarIndex)
Performs a basic backtest using the specified parameters and conditions.
Parameters:
high : The input series for the high price.
low : The input series for the low price.
open : The input series for the open price.
startLongTrade : The series of conditions that indicate the start of a long trade.
endLongTrade : The series of conditions that indicate the end of a long trade.
startShortTrade : The series of conditions that indicate the start of a short trade.
endShortTrade : The series of conditions that indicate the end of a short trade.
isStopLossHit : The stop loss hit indicator.
maxBarsBackIndex : The maximum number of bars to go back in the backtest.
thisBarIndex : The current bar index.
Returns: A tuple containing backtest values
init_table()
init_table()
Returns: tbl The backtest results.
update_table(tbl, tradeStatsHeader, totalTrades, totalWins, totalLosses, winLossRatio, winrate, stopLosses)
update_table(tbl, tradeStats)
Parameters:
tbl : The backtest results table.
tradeStatsHeader : The trade stats header.
totalTrades : The total number of trades.
totalWins : The total number of wins.
totalLosses : The total number of losses.
winLossRatio : The win loss ratio.
winrate : The winrate.
stopLosses : The total number of stop losses.
Returns: Updated backtest results table.
USD Liquidity Index
This USD Liquidity Index is composed of two parts: the total assets and the major liabilities of the Federal Reserve.
Historically, there is a certain positive correlation between USD liquidity and risk-asset price changes.
Since USD liquidity is mostly determined by the Federal Reserve's balance sheet (without leverage), this index deducts three major liabilities from the Federal Reserve's total assets (green line): the currency in circulation (WCURCIR) in gold, the Treasury General Account (WTREGEN) in blue, and the Reverse Repo (RRPONTSYD) in red.
The grey line is the calculation result of the USD Liquidity Index. As it goes up, liquidity increases, and vice versa.
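A minimal sketch of that arithmetic using the FRED series named above (using WALCL for total assets is my assumption; the script may use a different source):
//@version=5
indicator("USD Liquidity sketch")
assets = request.security("FRED:WALCL", "D", close) // total assets (assumed series)
cur = request.security("FRED:WCURCIR", "D", close) // currency in circulation
tga = request.security("FRED:WTREGEN", "D", close) // Treasury General Account
rrp = request.security("FRED:RRPONTSYD", "D", close) // reverse repo
plot(assets - cur - tga - rrp, "USD Liquidity Index", color.gray)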
GAVAD - Selling after a Strong Moviment
This strategy looks for moments when the market prints two consecutive, consistently strong candles, then opens a sell, targeting the immediate correction on the next candle. It's easy to read the bars on the histogram: purple bars represent the candle variation. When one bar crosses over the signal line, a yellow circle is plotted; if the second bar also crosses over the signal line, a green circle is plotted and the operation starts at the open of the next candle.
This strategy can be used on a lot of stocks and other charts. Often we need a small chart timeframe, maybe 1 or 5 minutes, because the gain should be planned for around the middle of the second candle. You need to look at the stocks you will use.
Stocks above 100 dollars aren't great, and neither are extremely volatile markets; but stocks that show consistent development are very interesting. Look at markets offering maybe 0.5% or 1%.
For this example, I developed it on the Brazilian Real x American Dollar pair, on the 15-minute timeframe.
If you use a smaller timeframe, the results can be better.
Over this period it made more than 500 trades with a small lot of contracts, without a big percent profitable but with a small profit on each operation; maybe you can aim for more. To present a realistic trading system, I added a spread to give a correct view of the results.
Does each stock, index, or crypto need a specific configuration?
My suggestion for new stocks:
Choose a stock and, using the setup, search for settings that give over 70% percent profitable, using a 1% gain and a loss between 1-2%,
as in the example (WDO).
By default, I prepared it for a Brazilian index:
Signal: 6 (6% is the variation of a candle relative to the last candle)
Multiplier: 10000 (important to configure the differences between a stock and an index)
Gain: 3 (set this proportion according to your target; as I said, 1% can be good)
Loss: 8 (set this proportion according to your bankroll management; as I said, maybe 2%, you need to evaluate)
To maximize the number of operations, I use it on the 1- or 5-minute chart. Larger timeframes give slower results
(but nothing prevents you from using it on a 1-hour or 1-day chart).
I wrote this script from scratch. Maybe the code isn't so organized, but it is very easy to understand. If you have any doubts, leave a comment.
I hope this helps you.
occ3
aka weighted fair price
The ultimate price source for all your stuff, unless you go completely nuts.
The ultimate way to build line charts & do pattern trading, unless you go completely nuts.
Why occ3?
You need a one-point estimate for every bar, a typical price of every bar aye? But then you see that every bar has a different distribution of prices. You can drop a stat test on every bar and pick median, mean, or whatever. But that's still prone to error (imagine borderline cases).
Instead, you can transform the task into a geometric one and say, "I wanna find the center of mass of all dem ticks within a particular interval (a day, a week, a century)". But lol ofc you won't do it, so let's estimate it:
1) a straight line from Open to Close more/less estimates a regression line if you woulda dropped regression on all the ticks within a given interval;
2) the centroid always lies on the regression line, so it's always in between the endpoints of the regression line. So that's why (open + close) / 2;
3) Then, you remember that sequence matters, + generally the volume is higher near the close, so...;
4) Voila, (open + close + close) / 3
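In Pine, the resulting source is a one-liner:
//@version=5
indicator("occ3", overlay=false)
plot((open + close + close) / 3, "occ3")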
Why "fair" price?
Take a daily bar:
1) High & low were the best prices to sell & buy;
2) Opening & closing auctions had acceptable prices, in exchange for the biggest potential to transact serious volume;
3) "Fair" price, logically, is somewhere in between the acceptable prices;
4) Market is fractal => the same principles propagate everywhere;
5) No, POCs and VPOCs don't make much sense as fair prices.
Nothing else to say; I really advise using it as a line chart if you trade price patterns.
Hulk Grid Algorithm - The Quant Science
Grid-based intraday algorithm that works 50% in trend following and 50% in swing trading. Orders are executed on a grid of 10 levels. The grid levels are dynamic, calculated from the difference between the previous day's open and close. The algorithm makes only long trades, based on the following logic:
1. The previous day's daily close is analyzed; the first condition is met if the previous day was bullish, closing higher than the opening.
2. A number of bars 'x' must pass before placing market orders.
3. The range, defined as the difference between the previous day's close and open, must be greater than 'x'.
If these three conditions are met, the algorithm will proceed to place long orders. Across a total of 10 grid levels, up to five trades are executed per day.
If the current close is above level 1 of the grid (previous day's close) then trend following trading will take place, working on the upper 5 levels. In this case each order is placed starting at level 1 and closed at each level above.
If the current close is below level 1 of the grid (previous day's open) then swing trading will be carried out, working on the lower 5 levels. In this case each order is placed starting at level 2 and closed at the upper level.
If at the time of order execution the price is above or below the stop loss and take profit levels, the algorithm will cancel the orders and prevent trading.
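A minimal sketch of the dynamic grid construction (the spacing of one fifth of the previous day's open-to-close range per level is my assumption for illustration):
//@version=5
indicator("Grid sketch", overlay=true)
[dOpen, dClose] = request.security(syminfo.tickerid, "D", [open[1], close[1]], lookahead = barmerge.lookahead_on)
step = math.abs(dClose - dOpen) / 5 // assumed spacing: 5 levels above, 5 below
plot(dClose, "Level 1 (prev. day close)", color.gray)
plot(dClose + step, "First upper level", color.green)
plot(dOpen - step, "First lower level", color.red)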
All orders are closed exclusively for two reasons:
1. If the stop loss or take profit level is confirmed.
2. If the daily session is ended.
UI Interface
You can adjust:
1. Backtesting period
2. The 'x' number of bars before placing orders at the market (remember to always add 2 to the number you enter in the user interface: if you enter 2, execution will occur at the market open after the fourth bar).
3. The intercepted price range between the previous day's close and open, to avoid trading on days when the range is too low.
4. Stop loss: the level calculated from the last lower grid; if the market breaks this level, the grid is destroyed and all open positions are closed.
5. Take profit: the level calculated from the last upper grid; if the market breaks this level, the grid is destroyed and all open positions are closed.
The backtesting you see in the example was generated on:
BINANCE:BTCUSDT
Timeframe 15 min
Stop loss 2%
Take profit 2%
Minimum bars 3
Size grid range 500
This algorithm can be used only on intraday timeframes.
Forex Strength Indicator
This indicator displays the strength of 8 currencies: EUR, AUD, NZD, JPY, USD, GBP, CHF, and CAD. Each line represents one currency. Alongside that, Fibonacci levels are plotted based on a standard deviation from linear regression, with customizable lengths.
For more steady Fibonacci levels, use higher lengths for both Standard Deviations and Linear Regression. All currency lines come from moving averages with options like EMA, SMA, WMA, RMA, HMA, SWMA, and Linear Regression.
When lines of the active pair are far from each other, it means higher divergence in those currency strengths among the other pairs. The closer the lines are, the lower the divergence.
You can use the Fibonacci levels as points for the reversal or end of the current trend. Line crosses can be used as a parameter for a more accurate signal of the next movement.
All 28 pairs are loaded from the same timeframe and use the same moving average.
Alerts for line crossings are available.
DataCorrelation
Library "DataCorrelation"
Implementation of functions related to data correlation calculations. Formulas have been transformed in such a way that we avoid running loops and instead make use of time series to gradually build the data we need to perform the calculation. This allows the calculations to run on unbound series and/or a higher number of samples.
🎲 Simplifying Covariance
Original Formula
//For Sample
Covₓᵧ = ∑ ((xᵢ-x̄)(yᵢ-ȳ)) / (n-1)
//For Population
Covₓᵧ = ∑ ((xᵢ-x̄)(yᵢ-ȳ)) / n
Now, if we look at the numerator, this can be simplified as follows
∑ ((xᵢ-x̄)(yᵢ-ȳ))
=> (x₁-x̄)(y₁-ȳ) + (x₂-x̄)(y₂-ȳ) + (x₃-x̄)(y₃-ȳ) ... + (xₙ-x̄)(yₙ-ȳ)
=> (x₁y₁ + x̄ȳ - x₁ȳ - y₁x̄) + (x₂y₂ + x̄ȳ - x₂ȳ - y₂x̄) + (x₃y₃ + x̄ȳ - x₃ȳ - y₃x̄) ... + (xₙyₙ + x̄ȳ - xₙȳ - yₙx̄)
=> (x₁y₁ + x₂y₂ + x₃y₃ ... + xₙyₙ) + (x̄ȳ + x̄ȳ + x̄ȳ ... + x̄ȳ) - (x₁ȳ + x₂ȳ + x₃ȳ ... xₙȳ) - (y₁x̄ + y₂x̄ + y₃x̄ + yₙx̄)
=> ∑xᵢyᵢ + n(x̄ȳ) - ȳ∑xᵢ - x̄∑yᵢ
So, the overall formula can be simplified to be used in Pine as
//For Sample
Covₓᵧ = (∑xᵢyᵢ + n(x̄ȳ) - ȳ∑xᵢ - x̄∑yᵢ) / (n-1)
//For Population
Covₓᵧ = (∑xᵢyᵢ + n(x̄ȳ) - ȳ∑xᵢ - x̄∑yᵢ) / n
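A sketch of how this maps to Pine using cumulative sums, so no loop over history is needed (sample covariance shown):
// running sample covariance via cumulative sums (illustrative)
f_cov(float x, float y) =>
    n = bar_index + 1
    sumX = ta.cum(x)
    sumY = ta.cum(y)
    sumXY = ta.cum(x * y)
    meanX = sumX / n
    meanY = sumY / n
    (sumXY + n * meanX * meanY - meanY * sumX - meanX * sumY) / (n - 1)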
🎲 Simplifying Standard Deviation
Original Formula
//For Sample
σ = √(∑(xᵢ-x̄)² / (n-1))
//For Population
σ = √(∑(xᵢ-x̄)² / n)
Now, if we look at the numerator within the square root
∑(xᵢ-x̄)²
=> (x₁² + x̄² - 2x₁x̄) + (x₂² + x̄² - 2x₂x̄) + (x₃² + x̄² - 2x₃x̄) ... + (xₙ² + x̄² - 2xₙx̄)
=> (x₁² + x₂² + x₃² ... + xₙ²) + (x̄² + x̄² + x̄² ... + x̄²) - (2x₁x̄ + 2x₂x̄ + 2x₃x̄ ... + 2xₙx̄)
=> ∑xᵢ² + nx̄² - 2x̄∑xᵢ
=> ∑xᵢ² + x̄(nx̄ - 2∑xᵢ)
So, the overall formula can be simplified to be used in Pine as
//For Sample
σ = √((∑xᵢ² + x̄(nx̄ - 2∑xᵢ)) / (n-1))
//For Population
σ = √((∑xᵢ² + x̄(nx̄ - 2∑xᵢ)) / n)
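And the matching Pine sketch for the sample standard deviation:
// running sample standard deviation via cumulative sums (illustrative)
f_stdev(float x) =>
    n = bar_index + 1
    sumX = ta.cum(x)
    sumX2 = ta.cum(x * x)
    meanX = sumX / n
    n > 1 ? math.sqrt((sumX2 + meanX * (n * meanX - 2 * sumX)) / (n - 1)) : na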
🎲 Using BinaryInsertionSort library
The Chatterjee Correlation and Spearman Correlation functions make use of the BinaryInsertionSort library to speed up sorting. That library implements a mechanism to insert values directly in sorted order, greatly reducing the sorting load and allowing the functions to work on a higher sample size.
🎲 Function Documentation
chatterjeeCorrelation(x, y, sampleSize, plotSize)
Calculates chatterjee correlation between two series. Formula is - ξnₓᵧ = 1 - (3 * ∑ |rᵢ₊₁ - rᵢ|)/ (n²-1)
Parameters:
x : First series for which correlation need to be calculated
y : Second series for which correlation need to be calculated
sampleSize : number of samples to be considered for calculation of correlation. Default is 20000
plotSize : How many historical values need to be plotted on chart.
Returns: float correlation - Chatterjee correlation value if falls within plotSize, else returns na
spearmanCorrelation(x, y, sampleSize, plotSize)
Calculates spearman correlation between two series. Formula is - ρ = 1 - (6∑dᵢ²/n(n²-1))
Parameters:
x : First series for which correlation need to be calculated
y : Second series for which correlation need to be calculated
sampleSize : number of samples to be considered for calculation of correlation. Default is 20000
plotSize : How many historical values need to be plotted on chart.
Returns: float correlation - Spearman correlation value if falls within plotSize, else returns na
covariance(x, y, include, biased)
Calculates covariance between two series of unbound length. Formula is Covₓᵧ = ∑ ((xᵢ-x̄)(yᵢ-ȳ)) / (n-1) for sample and Covₓᵧ = ∑ ((xᵢ-x̄)(yᵢ-ȳ)) / n for population
Parameters:
x : First series for which covariance need to be calculated
y : Second series for which covariance need to be calculated
include : boolean flag used for selectively including sample
biased : boolean flag representing population covariance instead of sample covariance
Returns: float covariance - covariance of selective samples of two series x, y
stddev(x, include, biased)
Calculates Standard Deviation of a series. Formula is σ = √( ∑(xᵢ-x̄)² / (n-1) ) for sample and σ = √( ∑(xᵢ-x̄)² / n ) for population
Parameters:
x : Series for which Standard Deviation need to be calculated
include : boolean flag used for selectively including sample
biased : boolean flag representing population standard deviation instead of sample standard deviation
Returns: float stddev - standard deviation of selective samples of series x
correlation(x, y, include)
Calculates pearson correlation between two series of unbound length. Formula is r = Covₓᵧ / σₓσᵧ
Parameters:
x : First series for which correlation need to be calculated
y : Second series for which correlation need to be calculated
include : boolean flag used for selectively including sample
Returns: float correlation - correlation between selective samples of two series x, y
JeeSauceScripts
Library "JeeSauceScripts"
getupdnvol()
GetTotalUpVolume(upvolume)
Parameters:
upvolume
GetTotalDnVolume(downvolume)
Parameters:
downvolume
GetDelta(totalupvolume, totaldownvolume)
Parameters:
totalupvolume
totaldownvolume
GetMaxUpVolume(upvolume)
Parameters:
upvolume
GetMaxDnVolume(downvolume)
Parameters:
downvolume
Getcvd()
Getcvdopen(cvd)
Parameters:
cvd
Getcvdhigh(cvd, maxvolumeup)
Parameters:
cvd
maxvolumeup
Getcvdlow(cvd, maxvolumedown)
Parameters:
cvd
maxvolumedown
Getcvdclose(cvd, delta)
Parameters:
cvd
delta
CombineData(data1, data2, data3, data4, data5, data6)
Parameters:
data1
data2
data3
data4
data5
data6
FindData(data, find)
Parameters:
data
find
Chatterjee Correlation
This is my first attempt at implementing a statistical method. This problem was given to me by @lejmer (who also helped me later on to build more efficient code to achieve this) when we were debating the need for higher resource allocation so scripts can run longer and faster. The major problem faced by those who want to implement statistics-based methods is that they run out of processing time or need to limit the data samples. My point was that such things need to be implemented with an algorithm which suits Pine instead of trying to port Python code directly. And yes, I am able to demonstrate that with this implementation of Chatterjee Correlation.
🎲 What is Chatterjee Correlation?
The Chatterjee rank Correlation Coefficient (CCC) is a method developed by Sourav Chatterjee which can be used to study non-linear correlation between two series.
Full documentation on the method can be found here:
arxiv.org
In short, the formula which we are implementing here is:
Algorithm can be simplified as follows:
1. Get the ranks of X
2. Get the ranks of Y
3. Sort ranks of Y in the order of X (Lets call this SortedYIndices)
4. Calculate the sum of adjacent Y ranks in SortedYIndices (Lets call it as SumOfAdjacentSortedIndices)
5. And finally the correlation coefficient can be calculated by using simple formula
CCC = 1 - (3*SumOfAdjacentSortedIndices)/(n^2 - 1)
🎲 Looks simple? What is the catch?
The mistake many people make here is thinking in Python/Java/C etc. while coding in Pine. This makes the code less efficient if it involves arrays and loops. The simple code may look something like this:
// x and y are the two series being tested; note array.new needs a type
var xArray = array.new<float>()
var yArray = array.new<float>()
array.push(xArray, x)
array.push(yArray, y)
n = array.size(xArray)
sortX = array.sort_indices(xArray)
sortY = array.sort_indices(yArray)
SumOfAdjacentSortedIndices = 0.0
index = array.get(sortX, 0)
for i = 1 to n > 1 ? n - 1 : na
    indexNext = array.get(sortX, i)
    SumOfAdjacentSortedIndices += math.abs(array.get(sortY, indexNext) - array.get(sortY, index))
    index := indexNext
correlation = 1 - 3 * SumOfAdjacentSortedIndices / (math.pow(n, 2) - 1)
But the problem here is the number of loops being run. Remember, Pine executes the code on every bar. Loops run inside array.sort_indices, and another loop runs to calculate SumOfAdjacentSortedIndices. Because of this, the chances of the program throwing runtime errors due to the script running for too long are pretty high. This greatly limits the number of samples against which we can run the study. The options to overcome this are:
Limit the sample size and calculate only between certain bars - this is not ideal, as smaller sets are more likely to yield false or inconsistent results.
Start thinking in Pine instead of Python and code in such a way that it is optimized for Pine - this is exactly what we have done in the published code.
🎲 How to think in Pine?
In order to think in Pine, you should try to eliminate loops as much as possible, especially on data which is continuously growing.
My first thought was that sorting takes lots of time and that I needed a better way to sort a series, especially a growing data set. Hence, I came up with this library, which implements Binary Insertion Sort.
Replacing array.sort_indices with binary insertion sort greatly reduces the number of loops run on each bar. In binary insertion sort, the array always remains sorted: any item we add is inserted at its place in the existing sort order, so there is no need to run a separate sort. This allows us to work with bigger data sets and utilize the full 20,000 bars for calculation instead of a few hundred.
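The core of that idea is a binary search for the insertion point, so each new sample costs a handful of comparisons plus one insert instead of a full re-sort. A minimal sketch:
// insert a value into an already-sorted array, keeping it sorted (illustrative)
f_insertSorted(array<float> a, float v) =>
    int lo = 0
    int hi = array.size(a)
    while lo < hi
        mid = math.floor((lo + hi) / 2)
        if array.get(a, mid) < v
            lo := mid + 1
        else
            hi := mid
    array.insert(a, lo, v)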
However, the last loop, where we calculate SumOfAdjacentSortedIndices, is not easily replaceable. Hence, we only limit these iterations to certain bars (even though we use the complete sample size). Plots are made only for those bars where the results need to be printed.
🎲 Implementation
The current implementation is limited to a few combinations of x and a fixed y. But I will be converting this into a library soon, which means programmers can plug in any x and y and get the correlation.
Our X here can be
Average volume
ATR
And our Y is distance of price from moving average - which identifies trend.
Thus, the indicator here helps to understand the correlation coefficient between volume and trend OR volatility and trend for given ticker and timeframe. Value closer to 1 means highly correlated and value closer to 0 means least correlated. Please note that this method will not tell how these values are correlated. That is, we will not be able to know if higher volume leads to higher trend or lower trend. But, we can say whether volume impacts trend or not.
Please note that values can differ by great extent for different timeframes. For example, if you look at 1D timeframe, you may get higher value of correlation coefficient whereas lower value for 1m timeframe. This means, volume to trend correlation is higher in 1D timeframe and lower in lower timeframes.
Replica of TradingView's Backtesting Engine with Arrays
Hello everyone,
Here is a perfectly replicated TradingView backtesting engine condensed into a single library function calculated with arrays. It includes TradingView's calculations for Net profit, Total Trades, Percent of Trades Profitable, Profit Factor, Max Drawdown (absolute and percent), and Average Trade (absolute and percent). Here's how TradingView defines each aspect of its backtesting system:
Net Profit: The overall profit or loss achieved.
Total Trades: The total number of closed trades, winning and losing.
Percent Profitable: The percentage of winning trades, the number of winning trades divided by the total number of closed trades.
Profit Factor: The amount of money the strategy made for every unit of money it lost, gross profits divided by gross losses.
Max Drawdown: The greatest loss drawdown, i.e., the greatest possible loss the strategy had compared to its highest profits.
Average Trade: The sum of money gained or lost by the average trade, Net Profit divided by the overall number of closed trades.
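To make those definitions concrete, here is a sketch of how the metrics could be derived from an array of closed-trade profits (illustrative names; the library's internals differ, and drawdown is omitted):
// compute the listed stats from closed-trade profits (illustrative)
f_stats(array<float> profits) =>
    float grossProfit = 0.0
    float grossLoss = 0.0
    int wins = 0
    for p in profits
        if p > 0
            grossProfit += p
            wins += 1
        else
            grossLoss += math.abs(p)
    totalTrades = array.size(profits)
    netProfit = grossProfit - grossLoss
    pctProfitable = totalTrades > 0 ? 100.0 * wins / totalTrades : na
    profitFactor = grossLoss > 0 ? grossProfit / grossLoss : na
    avgTrade = totalTrades > 0 ? netProfit / totalTrades : na
    [netProfit, totalTrades, pctProfitable, profitFactor, avgTrade]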
Here's how each variable is defined in the library function:
_backtest(bool _enter, bool _exit, float _startQty, float _tradeQty)
bool _enter: When the strategy should enter a trade (entry condition)
bool _exit: When the strategy should exit a trade (exit condition)
float _startQty: The starting capital in the account (for BTCUSD, it is the amount of USD the account starts with)
float _tradeQty: The amount of capital traded (if set to 1000 on BTCUSD, it will trade 1000 USD on each trade)
Currently, this library only works with long strategies, and I've included a commented-out section under DEMO STRATEGY where you can replicate my results with TradingView's backtesting engine. There's tons I could do with this beyond what is shown, but this was a project I worked on back in June of 2022 before getting burned out. Feel free to comment with any suggestions or bugs, and I'll try to add or fix them all soon. Here's my list of things to add to the library currently (may not all be added):
Add commission calculations.
Add support for shorting
Add a graph that resembles TradingView's overview graph.
Clean and optimize code.
Clean up in a way that makes it easy to add other TradingView calculations (such as Sharpe and Sortino ratio).
Separate all variables, so they become accessible outside of calculations (such as gross profit, gross loss, number of winning trades, number of losing trades, etc.).
Thanks for reading,
OztheWoz
Aggregated Volume Profile Spot & Futures
⚉ OVERVIEW ⚉
Aggregate Volume Profile - Shows the Volume Profile from 9 exchanges. Works on almost all CRYPTO Tickers!
You can enter your own desired exchanges, toggle others on or off, and select the sources: SPOT, FUTURES and others.
The script also includes several input parameters that allow the user to control which exchanges and currencies are included in the aggregated data.
The user can also choose how volume is displayed (in assets, U.S. dollars or euros) and how it is calculated (sum, average, median, or dispersion).
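A minimal sketch of the aggregation idea for two spot exchanges (symbol construction is illustrative; the script covers 9 exchanges plus futures):
//@version=5
indicator("Aggregated volume sketch")
// sum spot volume across exchanges, skipping exchanges that lack the pair
v1 = request.security("BINANCE:" + syminfo.basecurrency + "USDT", timeframe.period, volume, ignore_invalid_symbol = true)
v2 = request.security("COINBASE:" + syminfo.basecurrency + "USD", timeframe.period, volume, ignore_invalid_symbol = true)
plot(nz(v1) + nz(v2), "Aggregated volume")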
WARNING Indicator is for CRYPTO ONLY.
______________________
⚉ SETTINGS ⚉
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
Data Type — Choose Single or Aggregated data.
• Single — Show only current Volume.
• Aggregated — Show Aggregated Volume.
Volume By — You can also select how the volume is displayed.
• COIN — Volume in Actives.
• USD — Volume in United States Dollars.
• EUR — Volume in Euros.
• RUB — Volume in Russian Rubles.
Calculate By — Choose how Aggregated Volume it is calculated.
• SUM — This displays the total volume from all sources.
• AVG — This displays the average price of the volume from all sources.
• MEDIAN — This displays the median volume from all sources.
• VARIANCE — This displays the variance of the volume from all sources.
Delta Type — Select the Volume Profile type.
• Bullish — Shows the volume of buyers.
• Bearish — Shows the volume of sellers.
• Both — Shows the total volume of buyers and sellers.
Additional features
The remaining functions are responsible for the visual part of the Volume Profile; they are intuitive, and I recommend familiarizing yourself with them simply by using them.
________________
⚉ NOTES ⚉
‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾‾
If you have any ideas about what to add to my work, more sources, or cooler calculations, suggest them in a DM.
Also I recommend exploring and trying out my similar work.
Expected Move Plotter Intraday
Hello everyone!
I am releasing my Intra-day expected move plotter indicator.
About the indicator:
This indicator looks at 3 different time frames: 15, 30, and 60 minutes.
It calculates the average move from high to low over the past 5 candles and then plots the expected move based on that average.
It also attempts to determine the sentiment. It does this by taking the average of the high, low, and close of the previous 5-minute candle and comparing it to the close of the current 5-minute candle. It is essentially the premise of pivot points.
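A minimal sketch of one timeframe's projection and the pivot-style sentiment reference described above (details are illustrative):
//@version=5
indicator("Expected move sketch", overlay=true)
// 15-minute close and the 5-candle average high-to-low range
[c15, r15] = request.security(syminfo.tickerid, "15", [close, ta.sma(high - low, 5)])
plot(c15 + r15, "15m expected high", color.green)
plot(c15 - r15, "15m expected low", color.red)
// sentiment: previous 5-minute HLC average vs the current 5-minute close
[ref5, c5] = request.security(syminfo.tickerid, "5", [hlc3[1], close])
bgcolor(c5 > ref5 ? color.new(color.green, 90) : color.new(color.red, 90))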
Each time frame can be shut off or selected based on your preference, as well as the sentiment fills.
How to use:
Please play around with it and determine how you feel you could best use it, but I can share with you some tips that I have picked up from using this.
Wait for a clear rejection or respect of a level:
Once you have confirmed rejection or support, you can scalp to the next support level:
As well, you can switch between the 30 and 60 minute time frames as reference
30 Minute:
And that's it!
It's a very simplistic indicator, but it is quite helpful for identifying potential areas of reversal.
There really isn't much to it!
Also, it can be used on any stock!
As always, I have provided a quick tutorial video for your reference, linked below:
Let me know if you have any questions or recommendations for modification to make the indicator more useful and helpful.
Thanks so much for checking it out and trying it out everyone!
As always, safe trades and green days!
Probabilities Module - The Quant Science
This module can be integrated into your strategy or indicator code and will help you calculate the percentage probability of a specific event inside your strategy. The main goal is to improve and simplify the workflow if you are trying to build a quantitative strategy or indicator based on statistics or a reinforcement model.
Logic
The script runs a simulation inside your code based on a single event. By 'single event' we mean a trading logic composed of three different objects: entry, take profit, and stop loss.
The script scrapes the past through a look-back function and returns the percentage probability of the positive event inside the data sample. In this way you are able to understand and calculate how many times (in percentage terms) the conditions inside the single event were positive, helping you create your statistical edge.
You can adjust the look-back period in the user interface.
How to set up the module for your use case
At the top of the script you can find:
1. entry_condition : replace the default condition with your specific entry condition.
2. TPcondition_exit : replace the default condition with your specific take profit condition.
3. SLcondition_exit : replace the default condition with your specific stop loss condition.
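For example, a sketch of what those three replacements could look like (the RSI-based conditions are purely illustrative):
// illustrative conditions only - substitute your own logic
entry_condition = ta.crossover(ta.rsi(close, 14), 30) // your entry signal
TPcondition_exit = ta.crossunder(ta.rsi(close, 14), 70) // your take profit condition
SLcondition_exit = close < ta.lowest(low, 20)[1] // your stop loss condition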