
Add Volume Profile and Cluster Search as freqtrade feature #6845

Open
fridary opened this issue May 16, 2022 · 26 comments · May be fixed by #9066
Labels
Discussion Proposals which should be discussed before working on it.

Comments

@fridary

fridary commented May 16, 2022

When trading, I rely on a quite important technique called volume profiling. I added this functionality to freqtrade myself and want to share it with other people to help them improve their profits. Some people say it's unrealistic to trade on technical analysis alone. Either way, finding the big players that manipulate markets and clustering their trades opens up many more opportunities. It's a pity that only a few traders know about this trick.

Here is an example of BTC on the 15m timeframe, 1-13 May, with the trades clustered:
[Screenshot 2022-05-16 at 09:28:36]

The arrows point to huge volume accumulations, which show that some participants (big players/banks/insiders) are preparing for a big jump or exiting one. This can be read simply from time and sales (the trades print), so we can use those points to enter or exit trades.
Binance and a few other exchanges allow downloading historical trades: https://data.binance.vision/
Otherwise we can download via the ccxt API's fetch_trades, but only a few exchanges support the since parameter for fetching data from the past (e.g. binance, lbank2, ftx, kraken, hitbtc, bitfinex2, bitmex).
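For the fetch_trades route, the usual ccxt pattern is to advance the since cursor past the newest trade in each batch. A minimal sketch of that loop, with fetch_batch standing in for exchange.fetch_trades(symbol, since, limit) so no real exchange is contacted here:

```python
def collect_trades(fetch_batch, since, end_ms):
    """Collect trades in [since, end_ms) by repeatedly fetching a batch and
    advancing the cursor past the newest trade, i.e. the usual `since`
    pagination pattern for ccxt's fetch_trades(symbol, since, limit)."""
    all_trades = []
    while since < end_ms:
        batch = fetch_batch(since)  # stands in for: exchange.fetch_trades(symbol, since, limit)
        if not batch:
            break
        all_trades += [t for t in batch if t['timestamp'] < end_ms]
        since = batch[-1]['timestamp'] + 1  # +1 ms so the last trade is not fetched twice
    return all_trades
```

With a real exchange object, fetch_batch would wrap something like exchange.fetch_trades('BTC/USDT', since, 1000), and all timestamps are in milliseconds.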

Here is a short explanation of volume profile: https://www.youtube.com/watch?v=FGAmaUCOlLM
A super major find: the ATAS trading platform lets you try cluster search in real time, just to see what it is: https://atas.net/atas-possibilities/indicators/cluster-analysis-and-cluster-search-indicator/

@xmatthias I can share the code with you to add this to freqtrade and package it properly for other people. Pull requests are still a pain for me; I have a lot of unnecessary code mixed in.
P.S. It would be great if we created a small trading club to share good information/code/strategies.

@xmatthias
Member

xmatthias commented May 16, 2022

Trades data is not really necessary to calculate a volume profile, as also shown in #5795.

In general, I'm pretty sceptical about using raw trades data. While freqtrade has methods to download this data from kraken, the data gets huge once you have a slightly more active pair, which makes it difficult to work with without encountering timeouts.

You also can't rely on this data in production, as regular REST endpoints will usually not provide it up to the minute, but only after some delay (often several minutes), at least according to my experimentation with this some time ago. This will obviously differ by exchange, but no exchange guarantees up-to-date information for this via REST endpoints.

This means adding any feature like this to freqtrade is currently not possible (while you could probably use it for plotting, there's no value if you can't use it in your strategy).
This may change once ccxt open-sources CCXT Pro (Q3 this year, iirc), but until then I see this as not possible.
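For reference, the candle-based approximation can be sketched without any trades data. This is an illustrative sketch, not the #5795 code; spreading each candle's volume uniformly over its low-high range is an assumption:

```python
import pandas as pd

def volume_profile(candles: pd.DataFrame, bin_size: float) -> dict:
    """Rough volume profile from OHLCV candles: spread each candle's volume
    uniformly across the price bins its low-high range covers."""
    profile: dict = {}
    for row in candles.itertuples():
        lo = int(row.low // bin_size)   # lowest bin index touched by the candle
        hi = int(row.high // bin_size)  # highest bin index touched by the candle
        per_bin = row.volume / (hi - lo + 1)
        for b in range(lo, hi + 1):
            price = b * bin_size
            profile[price] = profile.get(price, 0.0) + per_bin
    return profile
```

This is exactly the "noisy" approximation fridary criticises below: it cannot distinguish buyer- from seller-initiated volume inside the candle.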

I can share the code with you to add this to freqtrade and package it properly for other people. Pull requests are still a pain for me; I have a lot of unnecessary code mixed in.

I'll not be taking a dump of "i modified all of this - please clean it up for me so it can be accepted".
If you think it should be contributed, please make a PR out of it, and clean the code up yourself.

While I can certainly assist with advice (also on how to do this properly using git tooling), I'll definitely not be the one doing the majority of that work, cleaning up other people's messy code.

Thanks for your understanding.

P.S. It would be great if we created a small trading club to share good information/code/strategies.

The freqtrade community is on this discord server. There are several channels where people discuss and share strategies, code and more, so I see this as already covered.

@fridary
Author

fridary commented May 18, 2022

Trades data is not really necessary to calculate a volume profile, as also shown in #5795.

That's not right. Yes, you can load 1-minute candles and calculate a price level for each candle by volume, but you will get noisy, dirty level values. Really valuable information comes only from reading trade prints / order flow: things like clustering, footprints, deltas, balances and much more.

A footprint cluster chart example with bid-ask distribution. There are a lot of tools and trading strategies based just on reading what is inside a candle.
[Screenshot 2022-05-18 at 08:46:05]

This is how I store footprints (15m BTC/USDT example):

'2022-05-01 20:00:00': {
    'high': 37847.3,
    'low': 37682.45,
    'open': 37805.06,
    'close': 37818.06,
    'volume': 185.39344356999996,
    'trades': 260,
    'footprint': {
        37682: {'ask_qty': 0,
                'ask_trades': 0,
                'bid_qty': 0.0025,
                'bid_trades': 1},
        # ...
        37743: {'ask_qty': 0.008,
                'ask_trades': 3,
                'bid_qty': 0.008,
                'bid_trades': 3},
        37744: {'ask_qty': 0.008,
                'ask_trades': 3,
                'bid_qty': 11.7304, # huge bid volume at 37744 level
                'bid_trades': 6},
        37745: {'ask_qty': 0.006,
                'ask_trades': 2,
                'bid_qty': 0.006,
                'bid_trades': 2},
        # ...
        37847: {'ask_qty': 0,
                'ask_trades': 0,
                'bid_qty': 1.052,
                'bid_trades': 1}
    },
},
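A structure like the dict above can be derived from raw trades in a few lines. The sketch below uses assumed field names (price, qty, is_buyer_maker) and follows Binance's convention that is_buyer_maker=True marks a seller-initiated trade, i.e. volume hitting the bid:

```python
def build_footprint(trades, price_step=1.0):
    """Aggregate raw trades into per-price-level bid/ask clusters in the same
    layout as the dict above. Assumes each trade is a dict with 'price',
    'qty' and 'is_buyer_maker' keys; is_buyer_maker=True is counted as a
    sell into the bid (Binance convention)."""
    footprint = {}
    for t in trades:
        # round the price down to the nearest level, e.g. 37744.2 -> 37744
        level = int(t['price'] // price_step * price_step)
        cell = footprint.setdefault(level, {'ask_qty': 0.0, 'ask_trades': 0,
                                            'bid_qty': 0.0, 'bid_trades': 0})
        side = 'bid' if t['is_buyer_maker'] else 'ask'
        cell[side + '_qty'] += t['qty']
        cell[side + '_trades'] += 1
    return footprint
```

Running this per 15m window over a day's ticks yields one footprint dict per candle, which can then be stored alongside OHLCV as shown above.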

Anyway, freqtrade is a magic tool thanks to your development and help!

@xmatthias added the Discussion label (Proposals which should be discussed before working on it) on Jul 10, 2022
@gmediatech

Hi @fridary,
I can see your work is amazing. I'm a day futures trader specialized in treasury bonds.

I have also been following cryptocurrency for a few years.

I would like to automate cryptocurrency trading: I want to create a simple crypto trading bot working with volume and order flow. I think your code could be very useful for me, if you'd like to share it.

@fridary
Author

fridary commented Sep 1, 2022

@gmediatech there are a lot of scripts to join together in order to use clusters in freqtrade; it's hard to just give you a copy to use. Do you know Python? Which exchange do you use? If Binance, ticks can be downloaded from https://data.binance.vision/; from that data you build candles, sum bid/ask and get the clusters. Ask questions if you have any.
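For the data.binance.vision route, the aggTrades dumps are headerless CSVs. A hedged sketch of loading one with pandas (the column order matches the spot aggTrades files; the one-row inline sample stands in for a real downloaded file):

```python
import io
import pandas as pd

# Column layout of the headerless aggTrades CSVs from data.binance.vision
COLS = ['agg_trade_id', 'price', 'qty', 'first_trade_id',
        'last_trade_id', 'time', 'is_buyer_maker', 'is_best_match']

# one-row stand-in for an extracted BTCUSDT-aggTrades-*.csv file
sample = io.StringIO('26129,37744.0,11.7304,27781,27781,1652688000000,True,True\n')

df = pd.read_csv(sample, names=COLS, header=None)
df['time'] = pd.to_datetime(df['time'], unit='ms')  # timestamps are ms since epoch
df = df[['time', 'price', 'qty', 'is_buyer_maker']]  # keep what clustering needs
```

From here, grouping by a 15m time bucket and by rounded price gives the candle-plus-footprint data described above.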

@dolphyvn

@fridary that looks promising. The data from data.binance.vision is not live data, right? Does it allow you to have real-time footprint data?

@fridary
Author

fridary commented Sep 12, 2022

@dolphyvn No, of course not. data.binance.vision is perfect for backtesting and easy to feed into freqtrade's populate_indicators() (load the csv files and compute the footprint separately). First I would suggest trying that; then write a custom script to get real-time ticks from the Binance API or websockets. The custom script can fetch real-time data and save the already-computed footprint to a text file, which populate_indicators() then picks up.
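One way to wire such a precomputed footprint file into populate_indicators() is a plain left-join on the candle date. A sketch under assumed column names ('date', 'bid_qty', 'ask_qty'), which are illustrations rather than anything freqtrade prescribes:

```python
import io
import pandas as pd

def attach_footprint(dataframe: pd.DataFrame, footprint_file) -> pd.DataFrame:
    """Left-join a precomputed per-candle footprint table onto the strategy
    dataframe by candle date; column names here are assumptions."""
    fp = pd.read_csv(footprint_file, parse_dates=['date'])
    return dataframe.merge(fp, on='date', how='left')

# toy demo: one candle plus a matching precomputed footprint row
candles = pd.DataFrame({'date': pd.to_datetime(['2022-05-01 20:00:00']),
                        'close': [37818.06]})
fp_file = io.StringIO('date,bid_qty,ask_qty\n2022-05-01 20:00:00,11.7304,0.008\n')
merged = attach_footprint(candles, fp_file)
```

Inside a strategy, the same call would go at the top of populate_indicators(), with a real file path instead of the in-memory buffer.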

@TheJoeSchr
Contributor

@fridary do you have a branch with the changes you made to get VP into Freqtrade?

I only recently learned more about volume profiles, but I think it would be a very valuable tool to have in Freqtrade, so I would be willing to work this into usable PRs so it's easier to understand and merge.

@fridary
Author

fridary commented Oct 28, 2022

@TheJoeSchr The only possible way is if @xmatthias allows adding this once ccxt has socket streams for tick data. For now I use custom scripts that write the tick data to files; freqtrade then loads those files and calculates the clusters from that tick data.

@TheJoeSchr
Contributor

I also found some example scripts in the official Binance GitHub, for testing and experimenting:

https://github.com/binance/binance-public-data/tree/master/python

@vasyacullin

@fridary Hello. Could you give me a copy as well? I have wanted to do cluster search for a long time.

@ForrestDevs

@fridary Hey, I'm a CS undergrad working on quant trading algorithms and interested in combining order flow with my TA algo. Would you mind sharing the code for footprints in freqtrade? Thanks!

@fridary
Author

fridary commented Dec 18, 2022

@Lukester45 @vasyacullin @TheJoeSchr @dolphyvn @gmediatech @xmatthias
I'm sharing my code, and I assume many other people check this topic for volume profile analysis; I hope it helps someone improve their profits and earn money.
Nonetheless, I'm still looking for good new strategies, because I know this can make 10x more profit, so if someone wants to create a small community around volume profile trading and share strategies, you're welcome to hit my telegram: @fridary
All the code below is structured for backtesting; for real-time use you need to create new functions and rewrite it.

The first file is bidask_api.py. It downloads tick data over HTTP or via the API. I found that only the 'binance', 'binance_futures', 'lbank2', 'ftx', 'kraken', 'hitbtc', 'bitfinex2', 'bitmex', 'bybit', 'gateio', 'gateio_futures', 'huobi', 'okx' and 'poloniex' exchanges allow downloading historical data.
clickhouse_driver is the ClickHouse Python module; ClickHouse is a great database that I use to store real-time tick data, but for now you can just skip it.

#
# telegram: @fridary
#
import asyncio
import random
import time
import shutil
import os
from os import listdir
from os.path import isfile, join
from datetime import datetime
from dateutil.relativedelta import relativedelta
import ccxt.async_support as ccxt
from pprint import pprint
import pandas as pd
from urllib.request import urlopen, Request
import gzip
from zipfile import ZipFile
from io import BytesIO


#proxy = ['http://user:pass@ip:port']
proxy = [False]
# pool of browser user-agent strings; one is chosen at random per session
ua = ['Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0',
      'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0',
      'Mozilla/5.0 (Windows NT 10.0; rv:91.0) Gecko/20100101 Firefox/91.0',
      'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.1 Safari/605.1.15',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36',
      'Mozilla/5.0 (X11; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0',
      'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36',
      'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36',
      'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:94.0) Gecko/20100101 Firefox/94.0',
      'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36',
      'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36',
      'Mozilla/5.0 (X11; Linux x86_64; rv:95.0) Gecko/20100101 Firefox/95.0',
      'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:95.0) Gecko/20100101 Firefox/95.0',
      'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:95.0) Gecko/20100101 Firefox/95.0',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.34',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.43',
      'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36',
      'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.0 Safari/605.1.15',
      'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36',
      'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36',
      'Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 OPR/81.0.4196.60',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36 Edg/96.0.1054.29',
      'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.2 Safari/605.1.15',
      'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36 Edg/96.0.1054.53',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0',
      'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36',
      'Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36 Edg/96.0.1054.57',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.53',
      'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36',
      'Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 OPR/81.0.4196.61',
      'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36',
      'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.1 Safari/605.1.15',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36 OPR/82.0.4227.23',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:96.0) Gecko/20100101 Firefox/96.0',
      'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.2 Safari/605.1.15',
      'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36',
      'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:93.0) Gecko/20100101 Firefox/93.0',
      'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36',
      'Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101 Firefox/78.0',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36',
      'Mozilla/5.0 (X11; CrOS x86_64 14150.87.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.124 Safari/537.36',
      'Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0',
      'Mozilla/5.0 (X11; Linux x86_64; rv:93.0) Gecko/20100101 Firefox/93.0',
      'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.1 Safari/605.1.15',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36',
      'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36 OPR/80.0.4170.63']



async def fetch_tickers():
    return await asyncio.gather(*(asyncio.ensure_future(safe_trader(p, i)) for i, p in enumerate(proxy * 1)))

async def safe_trader(proxy, _id):
    async with sem:
        global dates, processing, symbol, mode, client, db_name

        _symbol = symbol
    
        exchange = getattr(ccxt, exchange_id.replace('_futures', ''))({
            'verbose': False,
            'enableRateLimit': False,
            'rateLimit': 2 * 1000,
            'timeout': 10 * 1000,
        })
        if proxy != False: exchange.aiohttp_proxy = proxy
        exchange.userAgent = random.choice(ua)

        if exchange_id.endswith('_futures'):
            exchange.options['defaultType'] = 'future'

        if not exchange.has['fetchTrades']:
            exit('no fetchTrades')
        

        if exchange.id in ['binance', 'binance_futures']:
            #exchange.options['fetchTradesMethod'] = 'publicGetTrades'
            exchange.options['fetchTradesMethod'] = 'publicGetAggTrades'

        limit = 1000
        if exchange.id in ['binance', 'binance_futures']:
            limit = 1000
        if exchange.id == 'lbank2':
            limit = 600
        if exchange.id == 'ftx':
            limit = 5000
        if exchange.id == 'kraken':
            limit = None
            _symbol = _symbol.replace('USDT', 'USD')
        if exchange.id == 'hitbtc':
            limit = 1000
        if exchange.id == 'bitfinex2':
            limit = 5000
        if exchange.id == 'bitmex':
            limit = 1000
            _symbol += ':USDT'
        if exchange.id == 'poloniex':
            limit = 1000
        


        while len(dates) or any([x == True for x in processing]):

            if not len(dates):
                await asyncio.sleep(10)
                continue

            processing[_id] = True
            take_date = dates[0]
            del dates[0]

            print('#{} {} {}'.format(_id, proxy, take_date))

            since = exchange.parse8601('{}T00:00:00Z'.format(take_date))
            current_date = datetime.utcfromtimestamp(since / 1000)
            next_day = current_date + relativedelta(days=1)

            all_trades = []
            end_time = exchange.parse8601(next_day.strftime('%Y-%m-%dT00:00:00Z')) # ftx,poloniex only
            print('#{} fetchin {}'.format(_id, current_date.strftime('%Y-%m-%d')))
            while since < exchange.parse8601(next_day.strftime('%Y-%m-%dT00:00:00Z')):
                c = 0
                while 1:
                    try:
                        params = {}
                        if exchange.id == 'ftx': params['end_time'] = int(end_time) / 1000
                        if exchange.id == 'poloniex': params['end'] = int(end_time) / 1000
                        trades = await exchange.fetch_trades(_symbol, since, limit, params)
                        break
                    except ccxt.BaseError as e:
                        c += 1
                        if c == 3:
                            print('#{}'.format(_id), 'REMOVED', proxy)
                            dates.append(take_date)
                            await exchange.close()
                            processing[_id] = False
                            return
                        print('#{}'.format(_id), datetime.now().strftime('%H:%M:%S'), type(e).__name__, proxy, str(e), str(e.args))
                        if type(e).__name__ == 'DDoSProtection':
                            await asyncio.sleep(10 * c)
                        elif type(e).__name__ == 'BadSymbol':
                            await exchange.close()
                            processing[_id] = False
                            return
                        else:
                            await asyncio.sleep(4 * c)



                if exchange.id not in ['ftx', 'poloniex']:
                    print('  #{} since {}, trades={}, {} {}'.format(_id, datetime.utcfromtimestamp(since / 1000).strftime('%Y-%m-%d %H:%M:%S'), len(trades), exchange_id, symbol))
                else:
                    print('  #{} end_time {}, trades={}, {} {}'.format(_id, datetime.utcfromtimestamp(end_time / 1000).strftime('%Y-%m-%d %H:%M:%S'), len(trades), exchange_id, symbol))
                if not len(trades): break
                if 1 <= len(trades) <= 5 and exchange.id == 'poloniex': # trades bug
                    break

                if next_day.day == datetime.utcfromtimestamp(trades[len(trades) - 1]['timestamp'] / 1000).day:
                    for trade in trades[:]:
                        if next_day.day == datetime.utcfromtimestamp(trade['timestamp'] / 1000).day:
                            trades.remove(trade)
                    all_trades += trades
                    if exchange.id not in ['ftx', 'poloniex']:
                        break
                else:
                    if exchange.id not in ['ftx', 'poloniex']:
                        since = trades[len(trades) - 1]['timestamp'] + 1
                    all_trades += trades
                
                end_time = trades[0]['timestamp'] - 1
            
            since = exchange.parse8601(next_day.strftime('%Y-%m-%dT00:00:00Z'))

            if not all_trades:
                print('#{} date {} is empty on {}'.format(_id, take_date, exchange_id))
            elif take_date != datetime.utcfromtimestamp(all_trades[0]['timestamp'] / 1000).strftime('%Y-%m-%d') or take_date != datetime.utcfromtimestamp(all_trades[-1]['timestamp'] / 1000).strftime('%Y-%m-%d'):
                print('#{} {} or {} != {} on {}'.format(_id, datetime.utcfromtimestamp(all_trades[0]['timestamp'] / 1000).strftime('%Y-%m-%d'), datetime.utcfromtimestamp(all_trades[-1]['timestamp'] / 1000).strftime('%Y-%m-%d'), take_date, exchange.id))
            else:
                df = pd.DataFrame(all_trades)
                df['datetime'] = pd.to_datetime(df['datetime'])
                df.index = df['datetime']
                df.sort_index(inplace=True)

                if mode == None:
                    df = df[['timestamp', 'amount', 'price', 'side', 'type', 'takerOrMaker']]
                    df.to_csv('{}/{}/{}/{}.csv'.format(path, exchange_id, symbol.replace('/', '_'), current_date.strftime('%Y-%m-%d')))
                elif mode == 'fill_missing':
                    df = df[['datetime', 'id', 'timestamp', 'amount', 'price', 'side', 'type', 'takerOrMaker']]
                    df.rename(columns={'datetime': 'date'}, inplace=True)
                    df['id'] = df['id'].astype(int)
                    ids = client.execute(f"SELECT id from {db_name}.{exchange_id}_{symbol.replace('/', '_')} \
                        WHERE date >= '{take_date}' and date < '{datetime.fromisoformat(take_date)+relativedelta(days=1)}'")
                    if len(ids):
                        ids = list(list(zip(*ids))[0])
                        df = df[~df['id'].isin(ids)]
                    print(f'new df on {take_date}:')
                    print(df)
                    client.insert_dataframe(f"INSERT INTO {db_name}.{exchange_id}_{symbol.replace('/', '_')} VALUES", df)


            processing[_id] = False


        await exchange.close()


def download_url(ex, symbol, d, path):
    if ex == 'bybit':
        url = f"https://public.bybit.com/trading/{symbol.replace('/', '')}/{symbol.replace('/', '')}{d}.csv.gz"
    if ex == 'binance':
        url = f"https://data.binance.vision/data/spot/monthly/aggTrades/{symbol.replace('/', '')}/{symbol.replace('/', '')}-aggTrades-{d}.zip"
    if ex == 'binance_futures':
        url = f"https://data.binance.vision/data/futures/um/monthly/aggTrades/{symbol.replace('/', '')}/{symbol.replace('/', '')}-aggTrades-{d}.zip"
    if ex == 'gateio':
        url = f"https://download.gatedata.org/spot/deals/{d.replace('-', '')}/{symbol.replace('/', '_')}-{d.replace('-', '')}.csv.gz"
    if ex == 'gateio_futures':
        url = f"https://download.gatedata.org/futures_usdt/trades/{d.replace('-', '')}/{symbol.replace('/', '_')}-{d.replace('-', '')}.csv.gz" # https://www.gate.io/developer/historical_quotes
    if ex == 'huobi':
        d_next = (datetime.fromisoformat(d) + relativedelta(days=1)).strftime("%Y-%m-%d")
        url = f"https://futures.huobi.com/data/trades/spot/daily/{symbol.replace('/', '')}/{symbol.replace('/', '')}-trades-{d}.zip" # https://futures.huobi.com/vision/?prefix=data/trades/
        url_next = f"https://futures.huobi.com/data/trades/spot/daily/{symbol.replace('/', '')}/{symbol.replace('/', '')}-trades-{d_next}.zip"
    if ex == 'okx':
        d_next = (datetime.fromisoformat(d) + relativedelta(days=1)).strftime("%Y-%m-%d")
        url = f"https://static.okx.com/cdn/okex/traderecords/trades/daily/{d.replace('-', '')}/{symbol.replace('/', '-')}-trades-{d}.zip" # https://www.okx.com/ru/data-download
        url_next = f"https://static.okx.com/cdn/okex/traderecords/trades/daily/{d_next.replace('-', '')}/{symbol.replace('/', '-')}-trades-{d_next}.zip"


    print(f'Downloading {ex} {symbol} {d}: {url}')

    try:
        if ex in ['bybit', 'gateio', 'gateio_futures']:
            with urlopen(Request(url, data=None, headers={'User-Agent': random.choice(ua)})) as response:
                with gzip.GzipFile(fileobj=response) as uncompressed:
                    file_content = uncompressed.read()
            with open(f"{path}/{ex}/{symbol.replace('/', '_')}/{d}.csv", 'wb') as f:
                f.write(file_content)

        if ex in ['binance', 'binance_futures']:
            with urlopen(Request(url, data=None, headers={'User-Agent': random.choice(ua)})) as response:
                with ZipFile(BytesIO(response.read())) as zipfile:
                    zipfile.extractall(f"{path}/{ex}/{symbol.replace('/', '_')}")

        if ex in ['huobi', 'okx']:
            if not os.path.exists(f"{path}/{ex}/tmp"): os.makedirs(f"{path}/{ex}/tmp")
            with urlopen(Request(url, data=None, headers={'User-Agent': random.choice(ua)})) as response:
                with ZipFile(BytesIO(response.read())) as zipfile:
                    zipfile.extractall(f"{path}/{ex}/tmp")
            with urlopen(Request(url_next, data=None, headers={'User-Agent': random.choice(ua)})) as response:
                with ZipFile(BytesIO(response.read())) as zipfile:
                    zipfile.extractall(f"{path}/{ex}/tmp")

            if ex == 'huobi':
                file = '{}-trades-{}.csv'.format(symbol.replace('/', ''), d)
                xf = pd.read_csv(f"{path}/{ex}/tmp/{file}", names=['id','time','price','qty','side'], header=None)
                file_next = '{}-trades-{}.csv'.format(symbol.replace('/', ''), d_next)
                xf_next = pd.read_csv(f"{path}/{ex}/tmp/{file_next}", names=['id','time','price','qty','side'], header=None)
            if ex == 'okx':
                file = '{}-trades-{}.csv'.format(symbol.replace('/', '-'), d)
                xf = pd.read_csv(f"{path}/{ex}/tmp/{file}", names=['id','side','qty','price','time'], header=None)
                file_next = '{}-trades-{}.csv'.format(symbol.replace('/', '-'), d_next)
                xf_next = pd.read_csv(f"{path}/{ex}/tmp/{file_next}", names=['id','side','qty','price','time'], header=None)

            xf['_time'] = pd.to_datetime(xf['time'], unit='ms')
            xf_next['_time'] = pd.to_datetime(xf_next['time'], unit='ms')
            xf = pd.concat([xf, xf_next], ignore_index=True, sort=False)
            xf.sort_values('_time', ascending=True, inplace=True)
            xf = xf.loc[(xf['_time'] >= d) & (xf['_time'] < d_next)]
            xf.drop(['_time'], axis=1, inplace=True)
            xf.to_csv(f"{path}/{ex}/{symbol.replace('/', '_')}/{file}", header=False, index=False)
            shutil.rmtree(f"{path}/{ex}/tmp", ignore_errors=True)

    except Exception as e:
        print(e)



if __name__ == '__main__':


    path = '/disk_sdb/bidask'

    start_date = '2022-09-01'
    end_date = '2022-12-01'

    # fill_missing is for Clickhouse db to download missed data
    mode = None # fill_missing|None
    if mode == 'fill_missing':
        from clickhouse_driver import Client
        db_name = 'trades'
        client = Client(host='localhost', user='default', password='12345', settings={'use_numpy': True})
    else:
        client = None


    #for symbol in ['BTC/USDT', 'ETH/USDT', 'SOL/USDT', 'XRP/USDT', 'ADA/USDT', 'LTC/USDT', 'ATOM/USDT', 'LINK/USDT', 'AVAX/USDT', 'MATIC/USDT', 'NEAR/USDT']:
    for symbol in ['DOGE/USDT']:

        #for ex in ['binance', 'binance_futures', 'lbank2', 'ftx', 'kraken', 'hitbtc', 'bitfinex2', 'bitmex', 'bybit', 'gateio', 'gateio_futures', 'huobi', 'okx', 'poloniex']:
        for ex in ['binance_futures']:

            exchange_id = ex
            print('getting {} {}...'.format(exchange_id, symbol))

            dates = []
            d = datetime.fromisoformat(start_date)
            while d < datetime.fromisoformat(end_date):
                if mode != 'fill_missing' and ex in ['binance', 'binance_futures', 'gateio', 'gateio_futures']: d_ = d.strftime("%Y-%m")
                else: d_ = d.strftime("%Y-%m-%d")
                if d_ not in dates: dates.append(d_)
                d += relativedelta(days=1)
            

            if mode != 'fill_missing':
                if not os.path.exists('{}/{}/{}'.format(path, exchange_id, symbol.replace('/', '_'))):
                    os.makedirs('{}/{}/{}'.format(path, exchange_id, symbol.replace('/', '_')))
                
                full_path = '{}/{}/{}'.format(path, exchange_id, symbol.replace('/', '_'))
                files = [f for f in listdir(full_path) if isfile(join(full_path, f))]
                files = list(filter(lambda x: not x.startswith('df_') and x != '.DS_Store', files))
                for i, f in enumerate(files):
                    f = f.replace('.csv', '').replace(symbol.replace('/', ''), '').split('-')
                    for f_ in f[:]:
                        if not f_.isdigit():
                            f.remove(f_)
                    files[i] = '-'.join(f)
                
                
                dates = [x for x in dates if x not in files]
                print('dates', dates)
                if not len(dates): continue
            else:
                print('dates', dates)


            if mode != 'fill_missing' and ex in ['bybit', 'binance', 'binance_futures', 'gateio', 'gateio_futures', 'huobi', 'okx']:
                for d in dates:
                    download_url(ex, symbol, d, path)
            else:

                start_time = time.time()
                processing = [False for i in range(len(proxy) * 1)]
                sem = asyncio.Semaphore(len(proxy) * 1)
                loop = asyncio.get_event_loop()
                loop.run_until_complete(fetch_tickers())
                print("%.1f seconds" % (time.time() - start_time))

Now that we have downloaded historical tick data, let's create candle data in csv format. For a new experience I chose the renko format (https://help.quantower.com/analytics-panels/chart/chart-types/renko), but you can change it to candlestick. Renko is time independent: whenever price moves more than a given % (brick=0.3% in the code below), we get a new candle. bidask.py:

#
# telegram: @fridary
#

import numpy as np
import pandas as pd
from multiprocessing import Pool, cpu_count, current_process
from os import listdir
from os.path import isfile, join
import time
from datetime import datetime
from dateutil.relativedelta import relativedelta
from pprint import pprint



def get(file, next_day, path, exchange, params, symbol, chart, is_root):

    global df_binance_f

    print('reading {}/{}/{}...'.format(exchange, symbol.replace('/', '_'), file))
    
    aggregate_trades = False

    if exchange == 'binance' or exchange == 'binance_futures':

        # trades
        #df = pd.read_csv(path + '/' + file, names=['trade_id','price','qty','quoteQty','time','isBuyerMaker','isBestMatch'], header=None)

        # aggr trades
        df = pd.read_csv(path + '/' + file, names=['agg_trade_id','price','qty','first_trade_id','last_trade_id','time','isBuyerMaker','isBestMatch'], header=None)

        df['time'] = pd.to_datetime(df['time'], unit='ms')
        df.drop(['trade_id', 'agg_trade_id', 'first_trade_id', 'last_trade_id', 'quoteQty', 'isBestMatch'], axis=1, inplace=True, errors='ignore')

    elif exchange == 'bybit':
        df = pd.read_csv(path + '/' + file)
        df['isBuyerMaker'] = np.select([(df['side'] == 'Sell')], [True], default=False)
        df['timestamp'] = pd.to_datetime(df['timestamp'], unit='s')
        df.drop(['symbol', 'side', 'tickDirection', 'trdMatchID', 'grossValue', 'homeNotional', 'foreignNotional'], axis=1, inplace=True)
        df.rename(columns={'timestamp': 'time', 'size': 'qty'}, inplace=True)

    elif exchange == 'gateio':
        df = pd.read_csv(path + '/' + file, names=['Timestamp','Dealid','Price','Amount','Side'], header=None)
        df['isBuyerMaker'] = np.select([(df['Side'] == 1)], [True], default=False) # 1 == Sell
        df['timestamp'] = pd.to_datetime(df['Timestamp'], unit='s')
        df.drop(['Dealid', 'Side'], axis=1, inplace=True)
        df.rename(columns={'timestamp': 'time', 'Price': 'price', 'Amount': 'qty'}, inplace=True)

    elif exchange == 'gateio_futures':
        df = pd.read_csv(path + '/' + file, names=['Timestamp','Dealid','Price','Amount'], header=None)
        df['isBuyerMaker'] = np.select([(df['Amount'] < 0)], [True], default=False)
        df['timestamp'] = pd.to_datetime(df['Timestamp'], unit='s')
        df.drop(['Dealid'], axis=1, inplace=True)
        df.rename(columns={'timestamp': 'time', 'Price': 'price', 'Amount': 'qty'}, inplace=True)
        df['qty'] = df['qty'].abs()

    elif exchange == 'huobi':
        df = pd.read_csv(path + '/' + file, names=['id','time','price','qty','side'], header=None)
        df['isBuyerMaker'] = np.select([(df['side'] == 'sell')], [True], default=False)
        df['time'] = pd.to_datetime(df['time'], unit='ms')
        df.drop(['id', 'side'], axis=1, inplace=True)

    elif exchange == 'okx':
        df = pd.read_csv(path + '/' + file, names=['id','side','qty','price','time'], header=None)
        df['isBuyerMaker'] = np.select([(df['side'] == 'SELL')], [True], default=False)
        df['time'] = pd.to_datetime(df['time'], unit='ms')
        df.drop(['id', 'side'], axis=1, inplace=True)

    else:
        if not next_day:
            df = pd.read_csv(path + '/' + file)
        else:
            df = pd.concat(map(pd.read_csv, [path + '/' + file, path + '/' + next_day]), ignore_index=True)
        #df = pd.read_csv(path + '/' + file)
        df['isBuyerMaker'] = np.select([(df['side'] == 'sell')], [True], default=False)
        df['timestamp'] = pd.to_datetime(df['timestamp'], unit='ms')
        df.drop(['datetime', 'side', 'type', 'takerOrMaker'], axis=1, inplace=True)
        df.rename(columns={'timestamp': 'time', 'amount': 'qty'}, inplace=True)


    df.set_index('time', inplace=True, drop=False)
    if not isinstance(df.index, pd.DatetimeIndex):
        exit('err df.index={}'.format(type(df.index)))
    df.sort_index(inplace=True)

    print(' looping {}/{}/{}...'.format(exchange, symbol.replace('/', '_'), file))

    return calc_dfn(df, aggregate_trades, exchange, params, symbol, chart, is_root)



def calc_dfn(df, aggregate_trades, exchange, params, symbol, chart, is_root):
    global df_binance_f


    if chart == 'renko_tick':
        # tick size (minimum price step) per symbol; extend this mapping as needed
        if symbol == 'BTC/USDT':
            tick = 0.1
        elif symbol in ('ATOM/USDT', 'SOL/USDT'):
            tick = 0.01
        else:
            exit('no tick size configured for {}'.format(symbol))

    if chart == 'volume':
        if symbol == 'BTC/USDT':
            volume_size = 200
        else:
            exit('no volume_size configured for {}'.format(symbol))


    try:
        df_binance_f_indexes = df_binance_f['datetime'].tolist()
    except NameError:
        pass

    min_candle_seconds = 2
    dfn_g = {}
    df = df.to_dict(orient="records")
    key = None
    prevBrickOpen, prevBrickHigh, prevBrickLow, prevBrickClose, prevBrickTime = None, None, None, None, None
    start_time = time.time()
    for i, row in enumerate(df):


        price = row['price']
        if symbol == 'ADA/USDT':
            price = round(row['price'], 4)


        if chart == 'volume':
            if i == 0 or dfn_g[key]['qty'] + row['qty'] > volume_size:
                key = row['time']
                dfn_g[key] = {
                    'open': price,
                    'high': price,
                    'low': price,
                    'close': price,
                    'qty': row['qty'],
                    'trades': 1,
                    'foot': {},
                }
                if aggregate_trades: dfn_g[key]['trades_aggr'] = 1
            else:
                dfn_g[key]['high'] = price if price > dfn_g[key]['high'] else dfn_g[key]['high']
                dfn_g[key]['close'] = price
                dfn_g[key]['low'] = price if price < dfn_g[key]['low'] else dfn_g[key]['low']
                dfn_g[key]['qty'] += row['qty']
                dfn_g[key]['trades'] += 1
                if aggregate_trades and not(df[i-1]['time'] == row['time'] and df[i-1]['price'] == price and df[i-1]['isBuyerMaker'] == row['isBuyerMaker']):
                    dfn_g[key]['trades_aggr'] += 1


        if chart == 'renko_tick':
            # https://help.quantower.com/analytics-panels/chart/chart-types/renko
            if i == 0 \
                or len(dfn_g) == 1 and abs(price - dfn_g[key]['open']) > params['brick'] * tick \
                or len(dfn_g) > 1 \
                    and (row['time'] - key).total_seconds() > min_candle_seconds \
                    and (price > prevBrickClose and prevBrickClose > prevBrickOpen and price - prevBrickClose > params['brick'] * tick \
                        or price > prevBrickClose and prevBrickClose < prevBrickOpen and price - prevBrickClose > 2 * params['brick'] * tick \
                        or price < prevBrickClose and prevBrickClose < prevBrickOpen and prevBrickClose - price > params['brick'] * tick \
                        or price < prevBrickClose and prevBrickClose > prevBrickOpen and prevBrickClose - price > 2 * params['brick'] * tick):

                if len(dfn_g) > 0:
                    prevBrickOpen, prevBrickHigh, prevBrickLow, prevBrickClose, prevBrickTime = dfn_g[key]['open'], dfn_g[key]['high'], dfn_g[key]['low'], dfn_g[key]['close'], key
                key = row['time']
                dfn_g[key] = {
                    'open': price,
                    'high': price,
                    'low': price,
                    'close': price,
                    'qty': row['qty'],
                    'trades': 1,
                    'foot': {},
                }
                if aggregate_trades: dfn_g[key]['trades_aggr'] = 1

            else:
                dfn_g[key]['high'] = price if price > dfn_g[key]['high'] else dfn_g[key]['high']
                dfn_g[key]['close'] = price
                dfn_g[key]['low'] = price if price < dfn_g[key]['low'] else dfn_g[key]['low']
                dfn_g[key]['qty'] += row['qty']
                dfn_g[key]['trades'] += 1
                if aggregate_trades and not(df[i-1]['time'] == row['time'] and df[i-1]['price'] == price and df[i-1]['isBuyerMaker'] == row['isBuyerMaker']):
                    dfn_g[key]['trades_aggr'] += 1
        

        if chart == 'renko_perc':


            if exchange == 'binance_futures' and not is_root:

                if i == 0 \
                    or len(dfn_g) == 1 and abs(100 - 100 * price / dfn_g[key]['open']) > params['brick'] \
                    or len(dfn_g) > 1 \
                        and (row['time'] - key).total_seconds() > min_candle_seconds \
                        and (price > prevBrickClose and prevBrickClose >= prevBrickOpen and 100 * price / prevBrickClose - 100 > params['brick'] \
                            or price > prevBrickClose and prevBrickClose <= prevBrickOpen and 100 * price / prevBrickClose - 100 > 2 * params['brick'] \
                            or price < prevBrickClose and prevBrickClose <= prevBrickOpen and 100 - 100 * price / prevBrickClose > params['brick'] \
                            or price < prevBrickClose and prevBrickClose >= prevBrickOpen and 100 - 100 * price / prevBrickClose > 2 * params['brick']):
                            

                    if len(dfn_g) > 0:
                        prevBrickOpen, prevBrickHigh, prevBrickLow, prevBrickClose, prevBrickTime = dfn_g[key]['open'], dfn_g[key]['high'], dfn_g[key]['low'], dfn_g[key]['close'], key
                    key = row['time']
                    dfn_g[key] = {
                        'open': price,
                        'high': price,
                        'low': price,
                        'close': price,
                        'qty': row['qty'],
                        'trades': 1,
                        'foot': {},
                    }
                    if aggregate_trades: dfn_g[key]['trades_aggr'] = 1

                else:
                    dfn_g[key]['high'] = price if price > dfn_g[key]['high'] else dfn_g[key]['high']
                    dfn_g[key]['close'] = price
                    dfn_g[key]['low'] = price if price < dfn_g[key]['low'] else dfn_g[key]['low']
                    dfn_g[key]['qty'] += row['qty']
                    dfn_g[key]['trades'] += 1
                    if aggregate_trades and not(df[i-1]['time'] == row['time'] and df[i-1]['price'] == price and df[i-1]['isBuyerMaker'] == row['isBuyerMaker']):
                        dfn_g[key]['trades_aggr'] += 1

            else:

                try:
                    loc_ = df_binance_f.index.get_indexer([row['time']], method='ffill')[0]
                    key = df_binance_f_indexes[loc_]
                    if key == -1:
                        continue
                except KeyError:
                    if exchange not in ['binance', 'binance_futures', 'gateio', 'gateio_futures']:
                        break
                    continue

                if exchange in ['binance', 'binance_futures', 'gateio', 'gateio_futures'] and df[0]['time'].strftime('%Y-%m') != key.strftime('%Y-%m') \
                    or exchange not in ['binance', 'binance_futures', 'gateio', 'gateio_futures'] and df[0]['time'].strftime('%Y-%m-%d') != key.strftime('%Y-%m-%d'):
                    continue
                
                if loc_ + 1  == len(df_binance_f) and row['time'].strftime('%Y-%m-%d') != key.strftime('%Y-%m-%d'):
                    break
                
                key = str(key)

                if key not in dfn_g:
                    dfn_g[key] = {
                        'open': price,
                        'high': price,
                        'low': price,
                        'close': price,
                        'qty': row['qty'],
                        'trades': 1,
                        'foot': {},
                    }
                    if aggregate_trades: dfn_g[key]['trades_aggr'] = 1
                else:
                    dfn_g[key]['high'] = price if price > dfn_g[key]['high'] else dfn_g[key]['high']
                    dfn_g[key]['close'] = price
                    dfn_g[key]['low'] = price if price < dfn_g[key]['low'] else dfn_g[key]['low']
                    dfn_g[key]['qty'] += row['qty']
                    dfn_g[key]['trades'] += 1
                    if aggregate_trades and not(df[i-1]['time'] == row['time'] and df[i-1]['price'] == price and df[i-1]['isBuyerMaker'] == row['isBuyerMaker']):
                        dfn_g[key]['trades_aggr'] += 1



        if chart == 'candlestick':

            interval = params['interval']
            if 'm' in interval:
                interval_ = int(interval.replace('m', ''))
            else:
                exit('interval err')
            minute = row['time'].minute // interval_ * interval_
            key = '{}-{:02d}-{:02d} {:02d}:{:02d}:00'.format(row['time'].year, row['time'].month, row['time'].day, row['time'].hour, minute)

            if key not in dfn_g:
                dfn_g[key] = {
                    'open': price,
                    'high': price,
                    'low': price,
                    'close': price,
                    'qty': row['qty'],
                    #'trades': 1,
                    'foot': {},
                }
            else:
                dfn_g[key]['high'] = price if price > dfn_g[key]['high'] else dfn_g[key]['high']
                dfn_g[key]['close'] = price
                dfn_g[key]['low'] = price if price < dfn_g[key]['low'] else dfn_g[key]['low']
                dfn_g[key]['qty'] += row['qty']


        level = row['price']
        if symbol == 'BTC/USDT':
            level = round(row['price'])
        if symbol == 'ETH/USDT':
            level = round(row['price'], 1)
        if symbol == 'ATOM/USDT':
            level = round(row['price'], 3)
        if symbol == 'SOL/USDT':
            level = round(row['price'], 2)
        if symbol == 'ADA/USDT':
            level = round(row['price'], 4)

        if level not in dfn_g[key]['foot']:
            if row['isBuyerMaker']:
                dfn_g[key]['foot'][level] = {
                    'bid_qty': row['qty'],
                    'bid_trades': 1,
                    'ask_qty': 0,
                    'ask_trades': 0,
                }
                if aggregate_trades:
                    dfn_g[key]['foot'][level]['bid_trades_aggr'] = 1
                    dfn_g[key]['foot'][level]['ask_trades_aggr'] = 0
            else:
                dfn_g[key]['foot'][level] = {
                    'bid_qty': 0,
                    'bid_trades': 0,
                    'ask_qty': row['qty'],
                    'ask_trades': 1,
                }
                if aggregate_trades:
                    dfn_g[key]['foot'][level]['bid_trades_aggr'] = 0
                    dfn_g[key]['foot'][level]['ask_trades_aggr'] = 1
        else:
            if row['isBuyerMaker']:
                dfn_g[key]['foot'][level]['bid_qty'] += row['qty']
                dfn_g[key]['foot'][level]['bid_trades'] += 1
                if aggregate_trades and not(df[i-1]['time'] == row['time'] and df[i-1]['price'] == price and df[i-1]['isBuyerMaker'] == row['isBuyerMaker']):
                    dfn_g[key]['foot'][level]['bid_trades_aggr'] += 1
            else:
                dfn_g[key]['foot'][level]['ask_qty'] += row['qty']
                dfn_g[key]['foot'][level]['ask_trades'] += 1
                if aggregate_trades and not(df[i-1]['time'] == row['time'] and df[i-1]['price'] == price and df[i-1]['isBuyerMaker'] == row['isBuyerMaker']):
                    dfn_g[key]['foot'][level]['ask_trades_aggr'] += 1


    print(" ended looping for %s in %.1fm" % (df[0]['time'].strftime('%Y-%m-%d'), (time.time() - start_time) / 60))

    return dfn_g







def realtime(exchange, params, symbol, chart):
    global path_realtime, db_name, client

    df = client.query_dataframe(f"SELECT * from {db_name}.{exchange}_{symbol.replace('/', '_')} \
                WHERE date >= '{(datetime.now()-relativedelta(days=11)).strftime('%Y-%m-%d')}' ORDER BY id ASC")

    df['isBuyerMaker'] = np.select([(df['side'] == 'sell')], [True], default=False)
    df['timestamp'] = pd.to_datetime(df['timestamp'], unit='ms')
    df.drop(['date', 'side', 'type', 'takerOrMaker'], axis=1, inplace=True)
    df.rename(columns={'timestamp': 'time', 'amount': 'qty'}, inplace=True)
    df.set_index('time', inplace=True, drop=False)
    if not isinstance(df.index, pd.DatetimeIndex):
        exit('err df.index={}'.format(type(df.index)))
    df.sort_index(inplace=True)

    dfn_g = calc_dfn(df, False, exchange, params, symbol, chart, False)  # aggregate_trades=False, is_root=False

    df = pd.DataFrame.from_dict(dfn_g, orient='index')
    df.index = pd.to_datetime(df.index)
    df.index.name = 'datetime'
    df.sort_index(inplace=True)
    print(df)

    str_ = ''
    for k, p in params.items(): str_ += f'{k}={p},'
    str_ = str_[:-1]
    df.to_csv('{}/{}/{}/df_{}_{}.csv'.format(path_realtime, exchange, symbol.replace('/', '_'), chart, str_))
    print(f'{datetime.now()} realtime {exchange} {symbol} {params}')






if __name__ == '__main__':



    if 0:

        from clickhouse_driver import Client
        path_realtime = '/Downloads/realtime'
        db_name = 'trades'
        client = Client(host='localhost', user='default', password='12345', settings={'use_numpy': True})
        cpu = cpu_count() - 5
        cpu = 1
        pool = Pool(cpu)

        while 1:
            mass = [
                ('binance_futures', {'brick': 0.15}, 'ADA/USDT', 'renko_perc', False)
            ]
            pool.starmap_async(realtime, mass).get()
            break


    if 1:

        path_ = '/Downloads/bidask'
        chart = 'renko_perc' # candlestick|renko_tick|renko_perc|volume


        for symbol in ['DOGE/USDT']:
            #for param in [{'interval': '30m'}, {'interval': '15m'}, {'interval': '5m'}]:
            for param in [{'brick': 0.3}]:
                #for exchange in ['binance', 'binance_futures', 'lbank2', 'ftx', 'kraken', 'hitbtc', 'bitfinex2', 'bitmex', 'bybit', 'gateio', 'gateio_futures', 'huobi', 'okx', 'poloniex']:
                for exchange in ['binance_futures']:

                    if 'root' in param and param['root'] == symbol.split('/')[0]: continue

                    if exchange != 'binance_futures' or 'root' in param:
                        str_ = ''
                        for k, p in param.items():
                            if k == 'root': continue
                            str_ += f'{k}={p},'
                        str_ = str_[:-1]
                        symb_ = param['root'] + '_USDT' if 'root' in param else symbol.replace('/', '_')
                        filename = f"{path_}/binance_futures/{symb_}/df_{chart}_{str_}.csv"
                        df_binance_f = pd.read_csv(filename)
                        df_binance_f['datetime'] = pd.to_datetime(df_binance_f['datetime'])
                        df_binance_f = df_binance_f[['datetime']]
                        df_binance_f.set_index('datetime', inplace=True, drop=False)
                        print('df_binance_f:')
                        print(df_binance_f)

                    path = '{}/{}/{}'.format(path_, exchange, symbol.replace('/', '_'))
                    start_time = time.time()
                    files = [f for f in listdir(path) if isfile(join(path, f))]
                    files = list(filter(lambda x: not x.startswith('df_') and x != '.DS_Store', files))
                    files = sorted(files, key=lambda f: f)

                    print('files:')
                    print(files)

                    if 1 or 'root' in param:
                        date_start = '2022-09-01'
                        date_end = '2022-12-01'

                        if 'root' in param:
                            date_start = df_binance_f.iloc[0]['datetime'].strftime('%Y-%m-%d')
                            date_end = df_binance_f.iloc[-1]['datetime'].strftime('%Y-%m-%d')

                        date_start = datetime.fromisoformat(date_start)
                        date_end = datetime.fromisoformat(date_end)

                        for i, f in enumerate(files[:]):
                            fx = f.replace('.csv', '').replace(symbol.replace('/', ''), '').split('-')
                            for f_ in fx[:]:
                                if not f_.isdigit():
                                    fx.remove(f_)
                            fx = '-'.join(fx)

                            if 0:
                                if fx.count('-') == 2 and exchange in ['binance_futures']:
                                    files.remove(f)
                                    continue

                            if fx.count('-') == 1:
                                file_date = datetime.strptime(fx, '%Y-%m')
                                if not (file_date.strftime('%Y-%m') == date_start.strftime('%Y-%m') or file_date.strftime('%Y-%m') == date_end.strftime('%Y-%m') \
                                    or date_start <= file_date <= date_end):
                                    files.remove(f)
                            else:
                                file_date = datetime.fromisoformat(fx)
                                if file_date < date_start or file_date > date_end:
                                    files.remove(f)

                        print('files trimmed:')
                        print(files)

                    if not files:
                        print(f"{symbol} {exchange} no files")
                        continue

                    cpu = cpu_count() - 8
                    if exchange in ['binance', 'binance_futures'] and symbol in ['BTC/USDT', 'ETH/USDT']: cpu = 5
                    elif exchange in ['binance', 'binance_futures']: cpu = 12
                    pool = Pool(cpu)
                    files_ = []
                    for i, f in enumerate(files):
                        if chart != 'renko_perc' or i == len(files) - 1 or exchange in ['binance', 'binance_futures', 'gateio', 'gateio_futures']: next_day = None
                        else: next_day = files[i + 1]
                        files_.append((files[i], next_day, path, exchange, param, symbol, chart, 'root' in param))
                    dfn_g_ = pool.starmap_async(get, files_).get()
                    pool.close()

                    dfn_g = {}
                    for d in dfn_g_: dfn_g.update(d)

                    df = pd.DataFrame.from_dict(dfn_g, orient='index')
                    df.index = pd.to_datetime(df.index)
                    df.index.name = 'datetime'
                    df.sort_index(inplace=True)

                    str_ = ''
                    for k, p in param.items(): str_ += f'{k}={p},'
                    str_ = str_[:-1]
                    df.to_csv('{}/df_{}_{}.csv'.format(path, chart, str_))
                    print(f'{exchange} {symbol} {param}')
                    print("%.1fm total execution (%.1f seconds)" % ((time.time() - start_time) / 60, time.time() - start_time))
                    print(df)
                    print('----')
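For readers who just want the idea without the full script: the percentage-renko rule used in calc_dfn() above can be boiled down to a minimal, self-contained sketch. This is a simplification (closes only, no minimum candle time, no tick charts), and `renko_percent` is my own illustrative helper, not part of freqtrade or bidask.py:

```python
# Minimal percentage-renko sketch (simplified from bidask.py above):
# a new brick opens when price moves brick_pct percent in the trend
# direction, or 2 * brick_pct against it (reversal), measured from the
# last brick close.

def renko_percent(prices, brick_pct=0.3):
    """Return a list of (open, close) bricks built from a price sequence."""
    bricks = []
    open_ = prices[0]  # close of the last brick (or the first print)
    for price in prices[1:]:
        move = 100 * price / open_ - 100  # percent move since last brick close
        if bricks and (bricks[-1][1] > bricks[-1][0]) != (move > 0):
            threshold = 2 * brick_pct  # reversal needs a double-size move
        else:
            threshold = brick_pct
        if abs(move) > threshold:
            bricks.append((open_, price))
            open_ = price
    return bricks
```

With brick_pct=0.3, a 0.4% up-move opens an up brick, and the following down brick only opens once price falls more than 0.6% (the doubled reversal threshold) below that brick's close.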

Now, let's create a freqtrade strategy template to work with the clusters and visualize them nicely:
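Before the template, here is what "cluster search" means concretely on the data produced by bidask.py: each candle carries a `foot` dict mapping a price level to bid/ask volume, and a cluster is simply a level whose traded volume exceeds a threshold. A minimal sketch (the function name and threshold are my own illustration, not a freqtrade API; recall that in bidask.py `isBuyerMaker` trades accumulate into `bid_qty`, i.e. seller-initiated volume):

```python
def cluster_search(foot, min_volume):
    """Return price levels whose total traded volume (bid + ask) reaches min_volume.

    `foot` is the per-candle footprint produced by bidask.py:
    {price_level: {'bid_qty': ..., 'ask_qty': ..., 'bid_trades': ..., 'ask_trades': ...}}
    """
    clusters = []
    for level, sides in foot.items():
        total = sides['bid_qty'] + sides['ask_qty']
        if total >= min_volume:
            # delta > 0 means more buyer-initiated than seller-initiated volume
            delta = sides['ask_qty'] - sides['bid_qty']
            clusters.append({'level': level, 'volume': total, 'delta': delta})
    # biggest clusters first, like the arrows on the chart above
    return sorted(clusters, key=lambda c: c['volume'], reverse=True)
```

In a strategy you would run this over each candle's `foot` column and treat the surviving levels as support/resistance or entry triggers.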

Templategit.py:

import numpy as np  # noqa
import pandas as pd  # noqa
from pandas import DataFrame
from operator import itemgetter
from dateutil.relativedelta import relativedelta

from freqtrade.strategy import IStrategy

import talib.abstract as ta
import pandas_ta as pta
import freqtrade.vendor.qtpylib.indicators as qtpylib

from technical.indicators import *



class Templategit(IStrategy):

    INTERFACE_VERSION = 3
    
    can_short: bool = False

    trailing_stop = False
    # trailing_only_offset_is_reached = False
    # trailing_stop_positive = 0.01
    # trailing_stop_positive_offset = 0.0  # Disabled / not configured

    process_only_new_candles = False

    use_exit_signal = True
    exit_profit_only = False
    ignore_roi_if_entry_signal = False

    startup_candle_count: int = 400

    ema_range = [9, 18, 50, 150]
    tema_range = [12, 44, 77, 90]
    wma_range = [20, 40, 60, 110]
    vwma_range = [30, 50, 90, 140]
    vwap_range = [15, 30, 55, 99, 150]
    vidya_range = [9, 18, 27, 45]
    alma_range = [30, 60, 100]
    hilo_range = [26, 50, 75]
    mama_range = [1.0, 1.5, 2.0]

    order_types = {
        'entry': 'limit',
        'exit': 'limit',
        'stoploss': 'market',
        'stoploss_on_exchange': False
        # 'stoploss_on_exchange_interval': 60,
        # 'stoploss_on_exchange_limit_ratio': 0.98
    }

    order_time_in_force = {
        'entry': 'gtc',
        'exit': 'gtc'
    }


    cluster_range = {'cluster_bid': {'min': None, 'max': None}, 'cluster_ask': {'min': None, 'max': None}, 'cluster_sum': {'min': None, 'max': None}}
    legend_title = '-'
    df_renko_file = None


    @property
    def plot_config(self):

        plot = {}
        plot['main_plot'] = {
            'sar': {'color': 'white', 'plotly': {'mode': 'markers', 'marker': dict(color="#93B5C6", size=3)}},
            'SUPERTl': {'color': 'green', 'plotly': {'legendgroup': 'supertrend v2', 'visible': 'legendonly', 'name': 'Supertrend v2'}},
            'SUPERTs': {'color': 'red', 'plotly': {'legendgroup': 'supertrend v2', 'visible': 'legendonly', 'showlegend': False}},
        }

        plot['main_plot'] = {**plot['main_plot'], **{
            'cluster_range': self.cluster_range,
        }}

        plot['main_plot'] = {**plot['main_plot'], **{
            'legend_title': self.legend_title,
        }}


        plot['subplots'] = {
            'Cum delta': {
                'cumulative_delta': {'color': '#333', 'plotly': {'mode': 'lines+markers', 'line': dict(color='#777', width=1), 'marker': dict(size=3), 'showlegend': False}},
            },
            'Delta': {
                'delta_pos_up': {'type': 'bar', 'color': '#59bf8d', 'plotly': {'opacity': 0.9, 'showlegend': False}},
                'delta_pos_down': {'type': 'bar', 'color': '#3e8f69', 'plotly': {'opacity': 1, 'showlegend': False}},
                'delta_neg_up': {'type': 'bar', 'color': '#ff7ea4', 'plotly': {'opacity': 0.8, 'showlegend': False}},
                'delta_neg_down': {'type': 'bar', 'color': '#cf3161', 'plotly': {'opacity': 1, 'showlegend': False}},
            },
            'Delta_ema': {
                'delta_ema_pos_up': {'type': 'bar', 'color': '#59bf8d', 'plotly': {'opacity': 0.9, 'showlegend': False}},
                'delta_ema_pos_down': {'type': 'bar', 'color': '#3e8f69', 'plotly': {'opacity': 1, 'showlegend': False}},
                'delta_ema_neg_up': {'type': 'bar', 'color': '#ff7ea4', 'plotly': {'opacity': 0.8, 'showlegend': False}},
                'delta_ema_neg_down': {'type': 'bar', 'color': '#cf3161', 'plotly': {'opacity': 1, 'showlegend': False}},
            },
            'Time': {
                'time_delta': {'color': '#333', 'plotly': {'mode': 'lines+markers', 'line': dict(color='#333', width=1), 'marker': dict(size=3), 'showlegend': False}},
            },
            "RSI": {
                'rsi_line_30': {'color': '#93B5C6', 'plotly': {'opacity': 0.8, 'showlegend': False}},
                'rsi_line_70': {'color': '#93B5C6', 'plotly': {'opacity': 0.8, 'showlegend': False}},
                'rsi_pos': {'color': '#59bf8d', 'plotly': {'mode': 'lines+markers', 'line': dict(color='#59bf8d', width=1), 'marker': dict(size=3), 'showlegend': False}},
                'rsi_neg': {'color': '#f66691', 'plotly': {'mode': 'lines+markers', 'line': dict(color='#f66691', width=1), 'marker': dict(size=3), 'showlegend': False}},
            },
            "MACD": {
                'macd': {'color': '#4b7acf', 'plotly': {'showlegend': False}},
                'macdsignal': {'color': '#FF5722', 'plotly': {'showlegend': False}},
                'macdhist_pos_up': {'type': 'bar', 'color': '#59bf8d', 'plotly': {'opacity': 0.9, 'showlegend': False}},
                'macdhist_pos_down': {'type': 'bar', 'color': '#3e8f69', 'plotly': {'opacity': 1, 'showlegend': False}},
                'macdhist_neg_up': {'type': 'bar', 'color': '#ff7ea4', 'plotly': {'opacity': 0.8, 'showlegend': False}},
                'macdhist_neg_down': {'type': 'bar', 'color': '#cf3161', 'plotly': {'opacity': 1, 'showlegend': False}},
                'macd_cross_above': {'plotly': {'mode': 'markers', 'marker': dict(color="#447adb", size=5, line=dict(color="#fff", width=1)), 'showlegend': False}},
                'macd_cross_below': {'plotly': {'mode': 'markers', 'marker': dict(color="#cf3564", size=5, line=dict(color="#fff", width=1)), 'showlegend': False}},
            },
        }


        def create_line(_range, _line):
            for val in _range:
                if 0 and val == 9*2 and _line == 'vidya':
                    plot['main_plot'][f'{_line}_{val}_up'] = {'color': 'rgb(0,255,0)', 'plotly': {'legendgroup': f'{_line}_{val}', 'showlegend': False}}
                    plot['main_plot'][f'{_line}_{val}_down'] = {'color': 'rgb(139,0,0)', 'plotly': {'legendgroup': f'{_line}_{val}', 'name': f'{_line}_{val}'}}
                else:
                    plot['main_plot'][f'{_line}_{val}_up'] = {'color': '#50d2ff', 'plotly': {'visible': 'legendonly', 'legendgroup': f'{_line}_{val}', 'showlegend': False}}
                    plot['main_plot'][f'{_line}_{val}_down'] = {'color': '#a256ef', 'plotly': {'visible': 'legendonly', 'legendgroup': f'{_line}_{val}', 'name': f'{_line}_{val}'}}

        create_line(self.ema_range, 'ema')
        create_line(self.tema_range, 'tema')
        create_line(self.wma_range, 'wma')
        create_line(self.vwma_range, 'vwma')
        create_line(self.vwap_range, 'vwap')
        create_line(self.vidya_range, 'vidya')
        create_line(self.alma_range, 'alma')

        plot['main_plot'] = {**plot['main_plot'], **{
            'HILO': {'color': 'blue', 'plotly': {'visible': 'legendonly', 'legendgroup': 'hilo', 'name': 'hilo_13'}},
            'HILOl': {'color': '#0fef59', 'plotly': {'visible': 'legendonly', 'legendgroup': 'hilo', 'showlegend': False}},
            'HILOs': {'color': '#ff66c7', 'plotly': {'visible': 'legendonly', 'legendgroup': 'hilo', 'showlegend': False}},
        }}
        create_line(self.hilo_range, 'hilo')

        for val in self.mama_range:
            plot['main_plot'][f'mama_{val}_up'] = {'color': 'rgb(0,255,0)', 'plotly': {'visible': 'legendonly', 'legendgroup': f'mama_{val}', 'showlegend': False}}
            plot['main_plot'][f'mama_{val}_down'] = {'color': 'rgb(139,0,0)', 'plotly': {'visible': 'legendonly', 'legendgroup': f'mama_{val}', 'name': f'mama_{val}'}}
            plot['main_plot'][f'fama_{val}_up'] = {'color': '#50d2ff', 'plotly': {'visible': 'legendonly', 'legendgroup': f'mama_{val}', 'showlegend': False}}
            plot['main_plot'][f'fama_{val}_down'] = {'color': '#a256ef', 'plotly': {'visible': 'legendonly', 'legendgroup': f'mama_{val}', 'showlegend': False}}

        return plot


    def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        """
        Adds several different TA indicators to the given DataFrame

        Performance Note: For the best performance be frugal on the number of indicators
        you are using. Only uncomment the indicators you actually use in your strategies
        or your hyperopt configuration, otherwise you will waste memory and CPU.
        :param dataframe: Dataframe with data from the exchange
        :param metadata: Additional information, like the currently traded pair
        :return: a Dataframe with all mandatory indicators for the strategies
        """

        if 'i' not in dataframe.columns: dataframe['i'] = range(len(dataframe))

        def create_line(_range, _line):
            for val in _range:
                try:
                    frame = pd.Series(self.makeline(dataframe, period=val, src=_line))
                    frames.append(pd.DataFrame({
                        f'{_line}_{val}': frame,
                        f'{_line}_{val}_up': np.select([(frame.shift(1) <= frame)], [frame], default=np.nan),
                        f'{_line}_{val}_down': np.select([(frame.shift(1) > frame)], [frame], default=np.nan),
                    }))
                except Exception:
                    # skip (period, source) combinations that makeline cannot compute
                    pass


        if self.dp.runmode.value in ('plot',):  # note the comma: ('plot') is just a string, not a tuple
            frames = [dataframe]
            create_line(self.ema_range, 'ema')
            create_line(self.tema_range, 'tema')
            create_line(self.wma_range, 'wma')
            create_line(self.vwma_range, 'vwma')
            create_line(self.vwap_range, 'vwap')
            create_line(self.vidya_range, 'vidya')
            create_line(self.alma_range, 'alma')
            create_line(self.hilo_range, 'hilo')
            create_line(self.mama_range, 'mama')
            create_line(self.mama_range, 'fama')

            dataframe = pd.concat(frames, axis=1)

            supertrend = pta.supertrend(dataframe['high'], dataframe['low'], dataframe['close'], length=7, multiplier=3.0)
            dataframe['SUPERTl'] = supertrend['SUPERTl_7_3.0']
            dataframe['SUPERTs'] = supertrend['SUPERTs_7_3.0']

            macd = ta.MACD(dataframe, fastperiod=12, slowperiod=26, signalperiod=9)
            dataframe['macd'] = macd['macd']
            dataframe['macdsignal'] = macd['macdsignal']
            dataframe['macdhist'] = macd['macdhist']

            dataframe['rsi'] = ta.RSI(dataframe, timeperiod=14)

            hists = ['delta', 'delta_ema', 'macdhist']

            for name in hists:
                if name in dataframe.columns:
                    dataframe[f'{name}_pos_up'] = dataframe[name].where((dataframe[name] >= 0) & (dataframe[name] >= dataframe[name].shift(1)), None)
                    dataframe[f'{name}_pos_down'] = dataframe[name].where((dataframe[name] >= 0) & (dataframe[name] < dataframe[name].shift(1)), None)
                    dataframe[f'{name}_neg_up'] = dataframe[name].where((dataframe[name] < 0) & (dataframe[name] >= dataframe[name].shift(1)), None)
                    dataframe[f'{name}_neg_down'] = dataframe[name].where((dataframe[name] < 0) & (dataframe[name] < dataframe[name].shift(1)), None)


            if 'rsi' in dataframe.columns:
                dataframe['rsi_pos'] = dataframe['rsi'].where(dataframe['rsi'] >= 50, None)
                dataframe['rsi_neg'] = dataframe['rsi'].where(dataframe['rsi'] < 50, None)
                dataframe['rsi_line_30'] = 30
                dataframe['rsi_line_70'] = 70

            crosses = {
                'macd': ('macd', 'macdsignal'),
            }
            for ind, cross in crosses.items():
                if cross[0] in dataframe.columns and cross[1] in dataframe.columns:
                    dataframe[f'{ind}_cross_above'] = np.select([(
                            qtpylib.crossed_above(dataframe[cross[0]], dataframe[cross[1]])
                        )], [(dataframe[cross[0]] + dataframe[cross[0]].shift(1) + dataframe[cross[1]] + dataframe[cross[1]].shift(1)) / 4
                    ], default=np.nan)
                    dataframe[f'{ind}_cross_below'] = np.select([(
                            qtpylib.crossed_below(dataframe[cross[0]], dataframe[cross[1]])
                        )], [(dataframe[cross[0]] + dataframe[cross[0]].shift(1) + dataframe[cross[1]] + dataframe[cross[1]].shift(1)) / 4
                    ], default=np.nan)
            
            crosses = ['delta', 'delta_ema']
            for cross in crosses:
                if cross in dataframe.columns:
                    dataframe[f'{cross}_cross_above'] = np.select([(
                            (dataframe[cross] > 0) & (dataframe[cross].shift(1) <= 0)
                        )], [0], default=np.nan)
                    dataframe[f'{cross}_cross_below'] = np.select([(
                            (dataframe[cross].shift(1) >= 0) & (dataframe[cross] < 0)
                        )], [0], default=np.nan)



        return dataframe

    def populate_entry_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        """
        Based on TA indicators, populates the buy signal for the given dataframe
        :param dataframe: DataFrame populated with indicators
        :param metadata: Additional information, like the currently traded pair
        :return: DataFrame with buy column
        """


            
        return dataframe


    def populate_exit_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        """
        Based on TA indicators, populates the sell signal for the given dataframe
        :param dataframe: DataFrame populated with indicators
        :param metadata: Additional information, like the currently traded pair
        :return: DataFrame with buy column
        """

        return dataframe


    def populate_clusters(self, dataframe: DataFrame, metadata: dict, source='auto', chart={}) -> DataFrame:

        values = {
            'binance_futures': {
                'candlestick': {
                    'BTC/USDT': {
                        '15m': {'bid': 500,'ask': 500,'sum': 800},
                    },
                    'ETH/USDT': {
                        '15m': {'bid': 1200,'ask': 1200,'sum': 2400},
                    },
                    'ATOM/USDT': {
                        '15m': {'bid': 15000,'ask': 15000,'sum': 40000},
                    },
                    'SAND/USDT': {
                        '15m': {'bid': 100000,'ask': 100000,'sum': 180000},
                    },
                    'XRP/USDT': {
                        '15m': {'bid': 1000000,'ask': 1000000,'sum': 2000000}
                    },
                    'FTM/USDT': {
                        '15m': {'bid': 1000000,'ask': 1000000,'sum': 2000000}
                    },
                    'DOGE/USDT': {
                        '1m': {'bid': 8000000,'ask': 8000000,'sum': 2 * 8000000},
                    },
                },
                'renko_perc': {
                    'BTC/USDT': {
                        0.3: {'bid': 900,'ask': 900,'sum': 1600},
                    },
                    'ETH/USDT': {
                        0.3: {'bid': 1000,'ask': 1000,'sum': 2000},
                    },
                    'SOL/USDT': {
                        0.15: {'bid': 30000,'ask': 30000,'sum': 30000*2},
                        0.5: {'bid': 30000,'ask': 30000,'sum': 60000},
                        1.0: {'bid': 30000*4,'ask': 30000*4,'sum': 30000*8},
                    },
                    'ADA/USDT': {
                        0.3: {'bid': 1200000,'ask': 1200000,'sum': 2 * 1200000},
                    },
                    'THETA/USDT': {
                        0.3: {'bid': 100000,'ask': 100000,'sum': 2 * 100000},
                    },
                    'DOGE/USDT': {
                        0.3: {'bid': 7000000,'ask': 7000000,'sum': 2 * 7000000},
                    },
                }
            },
        }


        exchange = 'binance_futures'
        symbol = metadata['pair']
        timeframe = chart['type']
        path = '/Downloads/bidask_csv'

        # reload the cluster CSV on every call
        clusters = pd.read_csv('{}/{}/{}/{}.csv'.format(path, exchange, symbol.replace('/', '_'), self.df_renko_file), parse_dates=['datetime'])
        clusters.rename(columns={'datetime': 'date', 'qty': 'volume'}, inplace=True)
        clusters['date'] = pd.to_datetime(clusters['date'], utc=True)
        dataframe.drop(['foot', 'trades'], axis=1, inplace=True, errors='ignore')
        dataframe = pd.merge(dataframe, clusters[['date', 'foot']], how='inner', on='date')

        # replace the exchange volume with the trade-derived volume from the CSV
        dataframe.drop(['volume'], axis=1, inplace=True)
        dataframe = pd.merge(dataframe, clusters[['date', 'volume']], how='inner', on='date')

        dataframe.dropna(subset=['open'], inplace=True)


        if dataframe['foot'].isnull().values.any():
            print('NaN values in dataframe:')
            print(dataframe.loc[dataframe['foot'].isnull()])

            if exchange == 'poloniex' and symbol == 'SOL/USDT':
                dataframe['volume'] = dataframe['volume'].ffill()
        
        

        if 'i' not in dataframe.columns: dataframe['i'] = range(len(dataframe))
        self.legend_title = exchange
        dataframe['cluster_bid'] = pd.Series(dtype='object')
        dataframe['cluster_ask'] = pd.Series(dtype='object')
        dataframe['cluster_sum'] = pd.Series(dtype='object')
        dataframe['bid'], dataframe['ask'], dataframe['bid_trades'], dataframe['ask_trades'] = 0, 0, 0, 0


        # params for auto clusters detection (source='auto')
        koeff_auto = .010
        N_average_qty = 50

        poc_period = 40
        value_area_reached = 0.7 # 70%

        dfn = dataframe.to_dict(orient="records")
        delta_x = []
        l_bid_average, l_ask_average, l_bid_ask_average = [], [], []
        poc_values, poc_values_list, vah_values, val_values = [], [], [], []
        _bid, _ask, _bid_trades, _ask_trades = [], [], [], []
        _clusters_bid, _clusters_ask, _clusters_sum = [], [], []
        for i, row in enumerate(dfn):

            delta_x.append(np.nan)
            l_bid_average.append(np.nan)
            l_ask_average.append(np.nan)
            l_bid_ask_average.append(np.nan)
            poc_values.append(np.nan)
            poc_values_list.append({})
            vah_values.append(np.nan)
            val_values.append(np.nan)
            _bid.append(np.nan)
            _ask.append(np.nan)
            _bid_trades.append(np.nan)
            _ask_trades.append(np.nan)
            _clusters_bid.append(str({}))
            _clusters_ask.append(str({}))
            _clusters_sum.append(str({}))

            N = 3  # number of top price levels kept from the sorted footprint
            if pd.isnull(row['foot']):
                ask_qty = []
                bid_qty = []
            else:
                foot = eval(row['foot'])  # NOTE: ast.literal_eval would be safer for untrusted CSVs
                foot_ask_sorted = sorted(foot.values(), key=itemgetter('ask_qty'), reverse=True)
                ask_qty = list(map(lambda d: d['ask_qty'], foot_ask_sorted[:N]))
                foot_bid_sorted = sorted(foot.values(), key=itemgetter('bid_qty'), reverse=True)
                bid_qty = list(map(lambda d: d['bid_qty'], foot_bid_sorted[:N]))
            if len(ask_qty) < N: ask_qty = ask_qty + [0] * (N - len(ask_qty))
            if len(bid_qty) < N: bid_qty = bid_qty + [0] * (N - len(bid_qty))
            delta_x[-1] = sum(ask_qty) - sum(bid_qty)


            if pd.isnull(row['foot']): continue
            e_foot = eval(row['foot'])


            bid, ask, bid_ask = 0, 0, 0
            poc_values_local = {}
            for price, foot in e_foot.items():
                bid += foot['bid_qty']
                ask += foot['ask_qty']
                bid_ask += foot['bid_qty'] + foot['ask_qty']
                poc_values_local[price] = foot['bid_qty'] + foot['ask_qty']
            bid_average = bid / len(e_foot)
            ask_average = ask / len(e_foot)
            bid_ask_average = bid_ask / len(e_foot)
            l_bid_average[-1] = bid_average
            l_ask_average[-1] = ask_average
            l_bid_ask_average[-1] = bid_ask_average

            poc_values_list[-1] = poc_values_local
            if i > poc_period:
                current_bar = {}
                for bar in poc_values_list[-poc_period:]:
                    for price, volume in bar.items():
                        if price not in current_bar: current_bar[price] = volume
                        else: current_bar[price] += volume

                poc_values[-1] = max(current_bar, key=current_bar.get)
                high_sum_volume, low_sum_volume = 0, 0
                for price, volume in current_bar.items():
                    if price > poc_values[-1]: high_sum_volume += volume
                    if price < poc_values[-1]: low_sum_volume += volume
                current_bar = dict(sorted(current_bar.items()))
                # use a dedicated index: reusing `i` here would clobber the outer
                # bar index and break the `i >= N_average_qty` check further down
                idx = 0
                price_list = list(current_bar)
                volume_list = list(current_bar.values())
                for price, volume in current_bar.items():
                    if price == poc_values[-1]:
                        vah_values[-1] = price
                        val_values[-1] = price

                        # calc VAL: walk down from the POC until ~70% of the
                        # below-POC volume is covered
                        j = idx - 1
                        sum_volume = 0
                        while j >= 0:
                            sum_volume += volume_list[j]
                            if (low_sum_volume > 0 and sum_volume / low_sum_volume > value_area_reached) or j == 0:
                                val_values[-1] = price_list[j]
                                break
                            j -= 1

                        # calc VAH: walk up symmetrically through the above-POC volume
                        j = idx + 1
                        sum_volume = 0
                        while j < len(current_bar):
                            sum_volume += volume_list[j]
                            if (high_sum_volume > 0 and sum_volume / high_sum_volume > value_area_reached) or j == len(current_bar) - 1:
                                vah_values[-1] = price_list[j]
                                break
                            j += 1

                        break

                    idx += 1


            clusters_bid = {}
            clusters_ask = {}
            clusters_sum = {}
            bid, ask, bid_trades, ask_trades = 0, 0, 0, 0
            for price, foot in e_foot.items():
                bid += foot['bid_qty']
                ask += foot['ask_qty']
                bid_trades += foot['bid_trades']
                ask_trades += foot['ask_trades']

                if source == 'auto' and i >= N_average_qty:
                    if bid_average > 0 and foot['bid_qty'] / sum(l_bid_average[-N_average_qty:])/N_average_qty > koeff_auto:
                        clusters_bid[price] = f"{round(foot['bid_qty'], 4)}|x{round(foot['bid_qty']*1000 / sum(l_bid_average[-N_average_qty:])/N_average_qty, 1)}"
                        if not self.cluster_range['cluster_bid']['min'] or self.cluster_range['cluster_bid']['min'] > foot['bid_qty']: self.cluster_range['cluster_bid']['min'] = foot['bid_qty']
                        if not self.cluster_range['cluster_bid']['max'] or self.cluster_range['cluster_bid']['max'] < foot['bid_qty']: self.cluster_range['cluster_bid']['max'] = foot['bid_qty']
                    if ask_average > 0 and foot['ask_qty'] / sum(l_ask_average[-N_average_qty:])/N_average_qty > koeff_auto:
                        clusters_ask[price] = f"{round(foot['ask_qty'], 4)}|x{round(foot['ask_qty']*1000 / sum(l_ask_average[-N_average_qty:])/N_average_qty, 1)}"
                        if not self.cluster_range['cluster_ask']['min'] or self.cluster_range['cluster_ask']['min'] > foot['ask_qty']: self.cluster_range['cluster_ask']['min'] = foot['ask_qty']
                        if not self.cluster_range['cluster_ask']['max'] or self.cluster_range['cluster_ask']['max'] < foot['ask_qty']: self.cluster_range['cluster_ask']['max'] = foot['ask_qty']
                if source == 'values':
                    if foot['bid_qty'] >= values[exchange][timeframe][symbol][chart['value']]['bid']:
                        clusters_bid[price] = foot['bid_qty']
                        if not self.cluster_range['cluster_bid']['min'] or self.cluster_range['cluster_bid']['min'] > foot['bid_qty']: self.cluster_range['cluster_bid']['min'] = foot['bid_qty']
                        if not self.cluster_range['cluster_bid']['max'] or self.cluster_range['cluster_bid']['max'] < foot['bid_qty']: self.cluster_range['cluster_bid']['max'] = foot['bid_qty']
                    if foot['ask_qty'] >= values[exchange][timeframe][symbol][chart['value']]['ask']:
                        clusters_ask[price] = foot['ask_qty']
                        if not self.cluster_range['cluster_ask']['min'] or self.cluster_range['cluster_ask']['min'] > foot['ask_qty']: self.cluster_range['cluster_ask']['min'] = foot['ask_qty']
                        if not self.cluster_range['cluster_ask']['max'] or self.cluster_range['cluster_ask']['max'] < foot['ask_qty']: self.cluster_range['cluster_ask']['max'] = foot['ask_qty']
                    if foot['bid_qty'] + foot['ask_qty'] >= values[exchange][timeframe][symbol][chart['value']]['sum']:
                        clusters_sum[price] = foot['bid_qty'] + foot['ask_qty']
                        if not self.cluster_range['cluster_sum']['min'] or self.cluster_range['cluster_sum']['min'] > clusters_sum[price]: self.cluster_range['cluster_sum']['min'] = clusters_sum[price]
                        if not self.cluster_range['cluster_sum']['max'] or self.cluster_range['cluster_sum']['max'] < clusters_sum[price]: self.cluster_range['cluster_sum']['max'] = clusters_sum[price]
            
            if len(clusters_bid): _clusters_bid[-1] = str(clusters_bid)
            if len(clusters_ask): _clusters_ask[-1] = str(clusters_ask)
            if len(clusters_sum): _clusters_sum[-1] = str(clusters_sum)
            _bid[-1] = bid
            _ask[-1] = ask
            _bid_trades[-1] = bid_trades
            _ask_trades[-1] = ask_trades
        
        
        assert len(dataframe) == len(delta_x)
        dataframe['delta_x'] = delta_x

        assert len(dataframe) == len(_clusters_bid) == len(_clusters_ask) == len(_clusters_sum)
        dataframe['cluster_bid'] = _clusters_bid
        dataframe['cluster_ask'] = _clusters_ask
        dataframe['cluster_sum'] = _clusters_sum

        assert len(dataframe) == len(_bid) == len(_ask) == len(_bid_trades) == len(_ask_trades)
        dataframe['bid'] = _bid
        dataframe['ask'] = _ask
        dataframe['bid_trades'] = _bid_trades
        dataframe['ask_trades'] = _ask_trades

        dataframe['delta'] = dataframe['ask'] - dataframe['bid']
        dataframe['cumulative_delta'] = dataframe['delta'].cumsum() / 10000
        dataframe['delta_bid'] = self.makeline(pd.DataFrame({'close': dataframe['bid']}), period=9, src='ema')
        dataframe['delta_ask'] = self.makeline(pd.DataFrame({'close': dataframe['ask']}), period=9, src='ema')
        dataframe['delta_ema'] = dataframe['delta_ask'] - dataframe['delta_bid']

        assert len(dataframe) == len(poc_values) == len(vah_values) == len(val_values)
        dataframe['POC'] = poc_values
        dataframe['VAH'] = vah_values
        dataframe['VAL'] = val_values

        return dataframe
    
    

    def convert_to_renko(self, dataframe: DataFrame, metadata: dict, chart: dict) -> DataFrame:

        path = '/Downloads/bidask_csv'
        
        if chart['type'] == 'renko_perc': self.df_renko_file = f'df_renko_perc_brick={chart["value"]}'
        elif chart['type'] == 'candlestick': self.df_renko_file = f'df_candlestick_interval={chart["value"]}'

        renko = pd.read_csv('{}/binance_futures/{}/{}.csv'.format(path, metadata['pair'].replace('/', '_'), self.df_renko_file), parse_dates=['datetime'])
        renko.rename(columns={'datetime': 'date', 'qty': 'volume'}, inplace=True)
        renko['date'] = pd.to_datetime(renko['date'], utc=True)
        start_date = dataframe.iloc[0]['date'] - relativedelta(days=2)
        renko = renko.loc[(renko['date'] >= start_date) & (renko['date'] <= dataframe.iloc[-1]['date'])]
        renko.reset_index(drop=True, inplace=True)
        renko['time_delta'] = (renko['date'] - renko['date'].shift(1)).shift(-1).dt.total_seconds()  # total_seconds() keeps gaps longer than a day intact

        return renko
    

    def makeline(self, df, period=14, src='ema', column='close'):

        if column == 'close': masrc = df['close']
        elif column == 'hl2': masrc = pta.hl2(df['high'], df['low'])
        elif column == 'hlc3': masrc = pta.hlc3(df['high'], df['low'], df['close'])
        else: raise ValueError(f'makeline: unknown source column {column!r}')

        if src not in ['mama', 'fama']:
            period = int(period)
        else:
            while period > 3:
                period /= 2
            if period < 1: period = 1.0

        if src == 'sma':
            col = ta.SMA(masrc, timeperiod=period)
        if src == 'ema':
            col = ta.EMA(masrc, timeperiod=period)
        if src == 'dema':
            col = ta.DEMA(masrc, timeperiod=period)
        if src == 'tema':
            col = ta.TEMA(masrc, timeperiod=period)
        if src == 'zema':
            col = zema(df, period=period)
        if src == 'wma':
            col = ta.WMA(df, timeperiod=period)
        if src == 'vwma':
            col = vwma(df, period)
        if src == 'vwap':
            col = qtpylib.rolling_vwap(df, window=period)
        if src == 'vidya':
            while period > 80:
                period = int(period * .33)
            col = VIDYA(df, length=period)
        if src == 'hwma':
            col = pta.hwma(masrc, na=0.2, nb=0.1, nc=0.1)
        if src == 'kama':
            col = pta.kama(masrc, length=period, fast=2, slow=30)
        if src == 'mcgd':
            col = pta.mcgd(masrc, length=period, c=1.0)
        if src == 'rma':
            col = pta.rma(masrc, length=period)
        if src == 'sinwma':
            col = pta.sinwma(masrc, length=period)
        if src == 'hilo':
            hilo = pta.hilo(df['high'], df['low'], df['close'], high_length=period, low_length=int(period*1.8), mamode='sma')
            col = hilo[f'HILO_{period}_{int(period*1.8)}']
        #----------------
        if src == 'alma':
            col = pta.alma(masrc, length=period, sigma=6.0, distribution_offset=0.85)
        if src == 'hma':
            col = pta.hma(masrc, length=period)
        if src == 'jma':
            col = pta.jma(masrc, length=period, phase=0.0)
        if src == 'linreg':
            col = pta.linreg(masrc, length=period)
        if src == 'ssf':
            col = pta.ssf(masrc, length=period, poles=2)
        if src == 'swma':
            col = pta.swma(masrc, length=period, asc=True)
        if src == 'trima':
            col = pta.trima(masrc, length=period)
        if src == 'fwma':
            col = pta.fwma(masrc, length=period, asc=True)
        if src == 'mama':
            mama, fama = ta.MAMA(masrc, fastlimit=0.5/period, slowlimit=0.05/period)
            col = mama
        if src == 'fama':
            mama, fama = ta.MAMA(masrc, fastlimit=0.5/period, slowlimit=0.05/period)
            col = fama

        return col
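Aside on the POC/VAH/VAL computation in populate_clusters above: stripped of the rolling-window bookkeeping, the value-area logic reduces to a few lines. This is my own simplified re-sketch (function name and toy numbers are made up for illustration, not part of the strategy code): merge the footprints into one {price: volume} profile, take the highest-volume price as the POC, then walk away from it until roughly 70% of the volume on each side is covered.

```python
def value_area(profile, coverage=0.7):
    """profile: {price: volume} for the merged window. Returns (poc, val, vah)."""
    poc = max(profile, key=profile.get)          # price with the most traded volume
    prices = sorted(profile)
    volumes = [profile[p] for p in prices]
    idx = prices.index(poc)
    low_total = sum(v for p, v in profile.items() if p < poc)
    high_total = sum(v for p, v in profile.items() if p > poc)

    val = vah = poc
    acc = 0.0
    for j in range(idx - 1, -1, -1):             # walk down for VAL
        acc += volumes[j]
        if (low_total > 0 and acc / low_total > coverage) or j == 0:
            val = prices[j]
            break
    acc = 0.0
    for j in range(idx + 1, len(prices)):        # walk up for VAH
        acc += volumes[j]
        if (high_total > 0 and acc / high_total > coverage) or j == len(prices) - 1:
            vah = prices[j]
            break
    return poc, val, vah

poc, val, vah = value_area({99: 5, 100: 20, 101: 8, 102: 2})
# -> POC 100, VAL 99, VAH 101
```

On the toy profile, 100 holds the most volume, 99 covers all of the below-POC volume, and 101 already covers 80% of the above-POC volume, so the value area is 99..101.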

POC.py:

import sys
from pandas import DataFrame

sys.path.append('/freq-strategies')
from Templategit import Templategit

class POC(Templategit):

    minimal_roi = {
        "0": 1
    }
    stoploss = -1 / 100
    trailing_stop = False
    startup_candle_count: int = 3

    @property
    def plot_config(self):
        plot = Templategit.plot_config.fget(self)
        plot['main_plot']['VAH'] = {
            'color': 'rgba(0,0,0,0)',
            'fill_to': 'VAL',
            'fill_label': 'VAH/VAL',
            'fill_color': 'rgba(255,0,0,0.25)',
            'plotly': {'visible': True},
        }
        plot['main_plot']['POC'] = {'color': 'cyan'}
        del plot['subplots']['Cum delta']
        return plot

    def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        chart = {'type': 'renko_perc', 'value': 0.3}
        dataframe = self.convert_to_renko(dataframe, metadata, chart)
        dataframe = self.populate_clusters(dataframe, metadata, source='values', chart=chart)
        dataframe = super().populate_indicators(dataframe, metadata)
        return dataframe

    def populate_entry_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        dataframe = super().populate_entry_trend(dataframe, metadata)
        return dataframe

    def populate_exit_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
        dataframe = super().populate_exit_trend(dataframe, metadata)
        return dataframe
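The strategies above read their footprint data from CSV files under the bidask_csv folder. The exporter script is not included in this post, so the layout below is an assumption inferred from how populate_clusters consumes the `foot` column: each cell is the string form of a dict mapping price levels to bid/ask quantities and trade counts. A minimal decode sketch (ast.literal_eval is a safer drop-in for the eval() calls used above):

```python
from ast import literal_eval

# One 'foot' cell as stored in the CSV: price level -> executed bid/ask
# volume and trade counts at that price. (Layout inferred from how
# populate_clusters() reads it; the real exporter may differ.)
foot_cell = ("{30000.5: {'bid_qty': 1.2, 'ask_qty': 0.8, "
             "'bid_trades': 14, 'ask_trades': 9}, "
             "30001.0: {'bid_qty': 0.3, 'ask_qty': 2.1, "
             "'bid_trades': 5, 'ask_trades': 22}}")

foot = literal_eval(foot_cell)  # safer drop-in for eval() on footprint strings

# Per-candle aggregates, mirroring populate_clusters():
bid = sum(level['bid_qty'] for level in foot.values())
ask = sum(level['ask_qty'] for level in foot.values())
delta = ask - bid
```

With this layout, `delta` per candle is simply total ask volume minus total bid volume, which is what feeds the delta/cumulative_delta columns.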

Modified freqtrade/plot/plotting.py for cluster visualization plus a few other features of mine (tested on freqtrade v2022.11):

import logging
from pathlib import Path
from typing import Any, Dict, List, Optional

import pandas as pd
import pickle

from freqtrade.configuration import TimeRange
from freqtrade.data.btanalysis import (analyze_trade_parallelism, extract_trades_of_period,
                                       load_trades)
from freqtrade.data.converter import trim_dataframe
from freqtrade.data.dataprovider import DataProvider
from freqtrade.data.history import get_timerange, load_data
from freqtrade.data.metrics import (calculate_max_drawdown, calculate_underwater,
                                    combine_dataframes_with_mean, create_cum_profit)
from freqtrade.enums import CandleType
from freqtrade.exceptions import OperationalException
from freqtrade.exchange import timeframe_to_prev_date, timeframe_to_seconds
from freqtrade.misc import pair_to_filename
from freqtrade.plugins.pairlist.pairlist_helpers import expand_pairlist
from freqtrade.resolvers import ExchangeResolver, StrategyResolver
from freqtrade.strategy import IStrategy


logger = logging.getLogger(__name__)


try:
    import plotly.graph_objects as go
    from plotly.offline import plot
    from plotly.subplots import make_subplots
except ImportError:
    logger.exception("Module plotly not found \n Please install using `pip3 install plotly`")
    exit(1)


def init_plotscript(config, markets: List, startup_candles: int = 0):
    """
    Initialize objects needed for plotting
    :return: Dict with candle (OHLCV) data, trades and pairs
    """

    if "pairs" in config:
        pairs = expand_pairlist(config['pairs'], markets)
    else:
        pairs = expand_pairlist(config['exchange']['pair_whitelist'], markets)

    # Set timerange to use
    timerange = TimeRange.parse_timerange(config.get('timerange'))

    data = load_data(
        datadir=config.get('datadir'),
        pairs=pairs,
        timeframe=config['timeframe'],
        timerange=timerange,
        startup_candles=startup_candles,
        data_format=config.get('dataformat_ohlcv', 'json'),
        candle_type=config.get('candle_type_def', CandleType.SPOT)
    )

    if startup_candles and data:
        min_date, max_date = get_timerange(data)
        logger.info(f"Loading data from {min_date} to {max_date}")
        timerange.adjust_start_if_necessary(timeframe_to_seconds(config['timeframe']),
                                            startup_candles, min_date)

    no_trades = False
    filename = config.get("exportfilename")
    if config.get("no_trades", False):
        no_trades = True
    elif config['trade_source'] == 'file':
        if not filename.is_dir() and not filename.is_file():
            logger.warning("Backtest file is missing skipping trades.")
            no_trades = True
    try:
        trades = load_trades(
            config['trade_source'],
            db_url=config.get('db_url'),
            exportfilename=filename,
            no_trades=no_trades,
            strategy=config.get('strategy'),
        )
    except ValueError as e:
        raise OperationalException(e) from e
    if not trades.empty:
        trades = trim_dataframe(trades, timerange, 'open_date')

    return {"ohlcv": data,
            "trades": trades,
            "pairs": pairs,
            "timerange": timerange,
            }


def add_indicators(fig, row, indicators: Dict[str, Dict], data: pd.DataFrame) -> make_subplots:
    """
    Generate all the indicators selected by the user for a specific row, based on the configuration
    :param fig: Plot figure to append to
    :param row: row number for this plot
    :param indicators: Dict of Indicators with configuration options.
                       Dict key must correspond to dataframe column.
    :param data: candlestick DataFrame
    """
    plot_kinds = {
        'scatter': go.Scatter,
        'bar': go.Bar,
    }

    if 'cluster_sum' in data and row == 1:
        sizes = [10, 40]
        colors = {'cluster_bid': 'rgba(233,30,99,.99)', 'cluster_ask': 'rgba(88,120,20,.99)', 'cluster_sum': 'rgba(255,0,255,.99)'}
        first = {'cluster_bid': True, 'cluster_ask': True, 'cluster_sum': True}
        dfn = data.to_dict(orient="records")
        for i, _row in enumerate(dfn):
            for cl in ['cluster_bid', 'cluster_ask', 'cluster_sum']:
                if pd.isnull(_row[cl]): continue
                for price, qty in eval(_row[cl]).items():
                    min_ = indicators['cluster_range'][cl]['min']
                    if indicators['cluster_range'][cl]['max'] == indicators['cluster_range'][cl]['min']: min_ = 0
                    qty_ = qty if type(qty) != str else float(qty.split('|')[0])
                    fig.add_trace(go.Scatter(
                            x = [i],
                            y = [price], 
                            mode='markers',
                            name=cl,
                            text=qty if type(qty) == str else round(qty, 2),
                            showlegend=True if first[cl] else False,
                            visible='legendonly' if cl == 'cluster_sum' else True,
                            legendgroup=cl,
                            marker=dict(
                                symbol='square',
                                color=colors[cl].replace('.99', '.25'),
                                size=int((100 * (qty_ - min_) / (indicators['cluster_range'][cl]['max'] - min_)) * (sizes[1] - sizes[0]) / 100 + sizes[0]),
                                line=dict(color=colors[cl],width=2)
                            ),
                        ), row=row, col=1)
                    first[cl] = False

    for indicator, conf in indicators.items():
        logger.debug(f"indicator {indicator} with config {conf}")
        if indicator == 'levels' and conf:
            for j, level in enumerate(conf):
                for i, le in enumerate(level['verts']):
                    if i > 0:
                        fig.append_trace(go.Scatter(
                            x = [level['verts'][i-1]['date'], level['verts'][i]['date']],
                            y = [level['verts'][0]['high'], level['verts'][0]['high']], 
                            mode='lines',
                            name='level',
                            showlegend=False,
                            visible='legendonly',
                            legendgroup='1',
                            line=dict(color='#444', width=1) if level['verts'][i]['extreme'] else dict(color='#aaa', dash='7px', width=1)
                        ), row=row, col=1)
                    color = '#444' if i==0 else '#aaa'
                    if not level['verts'][i]['extreme']: color = '#ff99dd'
                    fig.add_trace(go.Scatter(
                        x = [le['date']],
                        y = [level['verts'][0]['high']],
                        mode='markers',
                        marker_symbol=0,
                        showlegend=True if j == 0 and i == 1 else False,
                        #showlegend=False,
                        visible='legendonly',
                        name='levels',
                        legendgroup='1',
                        line=dict(color=color, width=3)
                    ))
            continue
        if indicator in data:
            kwargs = {'x': list(range(len(data))),
                      #'x': data['date'],
                      'y': data[indicator].values,
                      'name': indicator
                      }

            plot_type = conf.get('type', 'scatter')
            color = conf.get('color')
            if plot_type == 'bar':
                kwargs.update({'marker_color': color or 'DarkSlateGrey',
                               'marker_line_color': color or 'DarkSlateGrey'})
            else:
                if color:
                    kwargs.update({'line': {'color': color}})
                kwargs['mode'] = 'lines'
                if plot_type != 'scatter':
                    logger.warning(f'Indicator {indicator} has unknown plot trace kind {plot_type}'
                                   f', assuming "scatter".')

            kwargs.update(conf.get('plotly', {}))
            trace = plot_kinds[plot_type](**kwargs)
            fig.add_trace(trace, row, 1)
        else:
            logger.info(
                'Indicator "%s" ignored. Reason: This indicator is not found '
                'in your strategy.',
                indicator
            )

    return fig


def add_profit(fig, row, data: pd.DataFrame, column: str, name: str) -> make_subplots:
    """
    Add profit-plot
    :param fig: Plot figure to append to
    :param row: row number for this plot
    :param data: candlestick DataFrame
    :param column: Column to use for plot
    :param name: Name to use
    :return: fig with added profit plot
    """
    profit = go.Scatter(
        x=data.index,
        y=data[column],
        name=name,
    )
    fig.add_trace(profit, row, 1)

    return fig


def add_max_drawdown(fig, row, trades: pd.DataFrame, df_comb: pd.DataFrame,
                     timeframe: str, starting_balance: float) -> make_subplots:
    """
    Add scatter points indicating max drawdown
    """
    try:
        _, highdate, lowdate, _, _, max_drawdown = calculate_max_drawdown(
            trades,
            starting_balance=starting_balance
        )

        drawdown = go.Scatter(
            x=[highdate, lowdate],
            y=[
                df_comb.loc[timeframe_to_prev_date(timeframe, highdate), 'cum_profit'],
                df_comb.loc[timeframe_to_prev_date(timeframe, lowdate), 'cum_profit'],
            ],
            mode='markers',
            name=f"Max drawdown {max_drawdown:.2%}",
            text=f"Max drawdown {max_drawdown:.2%}",
            marker=dict(
                symbol='square-open',
                size=9,
                line=dict(width=2),
                color='green'

            )
        )
        fig.add_trace(drawdown, row, 1)
    except ValueError:
        logger.warning("No trades found - not plotting max drawdown.")
    return fig


def add_underwater(fig, row, trades: pd.DataFrame, starting_balance: float) -> make_subplots:
    """
    Add underwater plots
    """
    try:
        underwater = calculate_underwater(
            trades,
            value_col="profit_abs",
            starting_balance=starting_balance
        )

        underwater_plot = go.Scatter(
            x=underwater['date'],
            y=underwater['drawdown'],
            name="Underwater Plot",
            fill='tozeroy',
            fillcolor='#cc362b',
            line={'color': '#cc362b'}
        )

        underwater_plot_relative = go.Scatter(
            x=underwater['date'],
            y=(-underwater['drawdown_relative']),
            name="Underwater Plot (%)",
            fill='tozeroy',
            fillcolor='green',
            line={'color': 'green'}
        )

        fig.add_trace(underwater_plot, row, 1)
        fig.add_trace(underwater_plot_relative, row + 1, 1)
    except ValueError:
        logger.warning("No trades found - not plotting underwater plot")
    return fig


def add_parallelism(fig, row, trades: pd.DataFrame, timeframe: str) -> make_subplots:
    """
    Add Chart showing trade parallelism
    """
    try:
        result = analyze_trade_parallelism(trades, timeframe)

        drawdown = go.Scatter(
            x=result.index,
            y=result['open_trades'],
            name="Parallel trades",
            fill='tozeroy',
            fillcolor='#242222',
            line={'color': '#242222'},
        )
        fig.add_trace(drawdown, row, 1)
    except ValueError:
        logger.warning("No trades found - not plotting Parallelism.")
    return fig


def plot_trades(fig, data: pd.DataFrame, trades: pd.DataFrame) -> make_subplots:
    """
    Add trades to "fig"
    """
    # Trades can be empty
    if trades is not None and len(trades) > 0:
        # Create description for exit summarizing the trade
        trades['desc'] = trades.apply(
            lambda row: f"{row['profit_ratio']:.2%}, " +
            (f"{row['enter_tag']}, " if row['enter_tag'] is not None else "") +
            f"{row['exit_reason']}, " +
            f"{row['trade_duration']} min",
            axis=1)

        # Re-parse dates at second precision (UTC) so trade dates merge exactly onto candle dates
        data['date'] = pd.to_datetime(data['date'].dt.strftime('%Y-%m-%d %H:%M:%S'), utc=True)
        data['open_date'] = data['date']
        me = pd.merge(trades, data[['open_date', 'i']], how='inner', on='open_date')
        trade_entries = go.Scatter(
            #x=trades["open_date"],
            x=me["i"],
            y=trades["open_rate"],
            mode='markers',
            name='Trade entry ({})'.format(len(trades["open_date"])),
            text=trades["desc"],
            marker=dict(
                symbol='circle-open',
                size=14,
                line=dict(width=3),
                color='cyan'

            )
        )

        data['close_date'] = data['date']
        me = pd.merge(trades, data[['close_date', 'i']], how='inner', on='close_date')
        trade_exits = go.Scatter(
            #x=trades.loc[trades['profit_ratio'] > 0, "close_date"],
            x=me.loc[me['profit_ratio'] > 0, "i"],
            y=trades.loc[trades['profit_ratio'] > 0, "close_rate"],
            text=trades.loc[trades['profit_ratio'] > 0, "desc"],
            mode='markers',
            name='Exit - Profit ({})'.format(len(trades.loc[trades['profit_ratio'] > 0])),
            marker=dict(
                symbol='square-open',
                size=14,
                line=dict(width=3),
                color='#10c5f5'
            )
        )
        trade_exits_loss = go.Scatter(
            #x=trades.loc[trades['profit_ratio'] <= 0, "close_date"],
            x=me.loc[me['profit_ratio'] <= 0, "i"],
            y=trades.loc[trades['profit_ratio'] <= 0, "close_rate"],
            text=trades.loc[trades['profit_ratio'] <= 0, "desc"],
            mode='markers',
            name='Exit - Loss ({})'.format(len(trades.loc[trades['profit_ratio'] <= 0])),
            marker=dict(
                symbol='square-open',
                size=14,
                line=dict(width=3),
                color='#9d0fdf'
            )
        )
        fig.add_trace(trade_entries, 1, 1)
        fig.add_trace(trade_exits, 1, 1)
        fig.add_trace(trade_exits_loss, 1, 1)
    else:
        logger.warning("No trades found.")
    return fig


def create_plotconfig(indicators1: List[str], indicators2: List[str],
                      plot_config: Dict[str, Dict]) -> Dict[str, Dict]:
    """
    Combines indicators 1 and indicators 2 into plot_config if necessary
    :param indicators1: List containing Main plot indicators
    :param indicators2: List containing Sub plot indicators
    :param plot_config: Dict of Dicts containing advanced plot configuration
    :return: plot_config - eventually with indicators 1 and 2
    """

    if plot_config:
        if indicators1:
            plot_config['main_plot'] = {ind: {} for ind in indicators1}
        if indicators2:
            plot_config['subplots'] = {'Other': {ind: {} for ind in indicators2}}

    if not plot_config:
        # If no indicators and no plot-config given, use defaults.
        if not indicators1:
            indicators1 = ['sma', 'ema3', 'ema5']
        if not indicators2:
            indicators2 = ['macd', 'macdsignal']

        # Create subplot configuration if plot_config is not available.
        plot_config = {
            'main_plot': {ind: {} for ind in indicators1},
            'subplots': {'Other': {ind: {} for ind in indicators2}},
        }
    if 'main_plot' not in plot_config:
        plot_config['main_plot'] = {}

    if 'subplots' not in plot_config:
        plot_config['subplots'] = {}
    return plot_config


def plot_area(fig, row: int, data: pd.DataFrame, indicator_a: str,
              indicator_b: str, label: str = "",
              fill_color: str = "rgba(0,176,246,0.2)",
              plotly: dict = {}) -> make_subplots:
    """ Creates a plot for the area between two traces and adds it to fig.
    :param fig: Plot figure to append to
    :param row: row number for this plot
    :param data: candlestick DataFrame
    :param indicator_a: indicator name as populated in strategy
    :param indicator_b: indicator name as populated in strategy
    :param label: label for the filled area
    :param fill_color: color to be used for the filled area
    :return: fig with added  filled_traces plot
    """
    if indicator_a in data and indicator_b in data:
        # make lines invisible to get the area plotted, only.
        line = {'color': 'rgba(255,255,255,0)'}
        # TODO: Figure out why scattergl causes problems plotly/plotly.js#2284
        trace_a = go.Scatter(x=list(range(len(data))), y=data[indicator_a],
        #trace_a = go.Scatter(x=data.date, y=data[indicator_a],
                             showlegend=False,
                             line=line)
        trace_b = go.Scatter(x=list(range(len(data))), y=data[indicator_b], name=label,
        #trace_b = go.Scatter(x=data.date, y=data[indicator_b], name=label,
                             fill="tonexty", fillcolor=fill_color,
                             line=line, **plotly)
        fig.add_trace(trace_a, row, 1)
        fig.add_trace(trace_b, row, 1)
    return fig


def add_areas(fig, row: int, data: pd.DataFrame, indicators) -> make_subplots:
    """ Adds all area plots (specified in plot_config) to fig.
    :param fig: Plot figure to append to
    :param row: row number for this plot
    :param data: candlestick DataFrame
    :param indicators: dict with indicators. ie.: plot_config['main_plot'] or
                            plot_config['subplots'][subplot_label]
    :return: fig with added  filled_traces plot
    """
    for indicator, ind_conf in indicators.items():
        if ind_conf and 'fill_to' in ind_conf:
            indicator_b = ind_conf['fill_to']
            if indicator in data and indicator_b in data:
                label = ind_conf.get('fill_label',
                                     f'{indicator}<>{indicator_b}')
                fill_color = ind_conf.get('fill_color', 'rgba(0,176,246,0.2)')
                fig = plot_area(fig, row, data, indicator, indicator_b,
                                label=label, fill_color=fill_color,
                                plotly=ind_conf.get('plotly', {}))
            elif indicator not in data:
                logger.info(
                    'Indicator "%s" ignored. Reason: This indicator is not '
                    'found in your strategy.', indicator
                )
            elif indicator_b not in data:
                logger.info(
                    'fill_to: "%s" ignored. Reason: This indicator is not '
                    'in your strategy.', indicator_b
                )
    return fig


def create_scatter(
    data,
    column_name,
    color,
    direction
) -> Optional[go.Scatter]:

    if column_name in data.columns:
        df_short = data[data[column_name] == 1]
        if len(df_short) > 0:
            shorts = go.Scatter(
                #x=df_short.date,
                x=df_short.i,
                y=df_short.close,
                mode='markers',
                name='{} ({})'.format(column_name, len(df_short)),
                marker=dict(
                    symbol=f"triangle-{direction}-dot",
                    size=9,
                    line=dict(width=1),
                    color=color,
                )
            )
            return shorts
        else:
            logger.warning(f"No {column_name}-signals found.")

    return None


def generate_candlestick_graph(pair: str, data: pd.DataFrame, trades: pd.DataFrame = None, *,
                               indicators1: List[str] = [],
                               indicators2: List[str] = [],
                               plot_config: Dict[str, Dict] = {},
                               config: Dict[str, Any] = {}
                               ) -> go.Figure:
    """
    Generate the graph from the data generated by Backtesting or from DB
    Volume will always be plotted in row 2, so rows 1 and 3+ are at our disposal for custom indicators
    :param pair: Pair to Display on the graph
    :param data: OHLCV DataFrame containing indicators and entry/exit signals
    :param trades: All trades created
    :param indicators1: List containing Main plot indicators
    :param indicators2: List containing Sub plot indicators
    :param plot_config: Dict of Dicts containing advanced plot configuration
    :return: Plotly figure
    """

    # Integer candle index - the plots below use it as the x-axis instead of dates
    data['i'] = range(len(data))

    if 'subplots' in plot_config:
        # Drop subplots for which none of the configured indicators exist in the dataframe
        for label in list(plot_config['subplots']):
            sub_config = plot_config['subplots'][label]
            if not any(k in data.columns for k in sub_config):
                del plot_config['subplots'][label]

    plot_config = create_plotconfig(indicators1, indicators2, plot_config)
    rows = 2 + len(plot_config['subplots'])
    #row_widths = [1 for _ in plot_config['subplots']]
    row_widths = []
    for key in reversed(plot_config['subplots'].keys()):
        if key == 'ADX+DI':
            row_widths.append(1.0)
        elif 'Heik' in key:
            row_widths.append(0.50)
        elif 'Cum delta' in key:
            row_widths.append(3.0)
        else:
            row_widths.append(0.75)
    # Define the graph
    fig = make_subplots(
        rows=rows,
        cols=1,
        shared_xaxes=True,
        #row_width=row_widths + [1, 4],
        row_width=row_widths + [0.5,5],
        vertical_spacing=0.0001,
    )
    fig['layout'].update(title=pair)
    fig['layout']['yaxis1'].update(title='Price')
    fig['layout']['yaxis2'].update(title='Volume')
    for i, name in enumerate(plot_config['subplots']):
        fig['layout'][f'yaxis{3 + i}'].update(title=name)
    fig['layout']['xaxis']['rangeslider'].update(visible=False)
    fig.update_layout(modebar_add=["v1hovermode", "toggleSpikeLines"])

    fig.update_layout(margin=dict(l=0, r=0, t=0, b=0), dragmode='pan',
                      spikedistance=-1, xaxis_rangeslider_visible=False)
    fig.update_xaxes(showspikes=True, showticklabels=False, spikemode='across',
                     spikesnap='cursor', showline=True, showgrid=True,
                     spikethickness=1, spikecolor='#333')
    ##fig.update_xaxes(row=2, col=1)
    #fig.update_layout(hovermode="x")
    fig.update_yaxes(showspikes=True, spikemode='across', spikesnap='cursor',
                     showline=True, showgrid=True, spikethickness=1, spikecolor='#333')
    ##fig.update_yaxes(automargin=True)

    dfn = data.to_dict(orient="records")

    # Common information
    candles = go.Candlestick(
        #x=data.date,
        x=list(range(len(data))),
        open=data.open,
        high=data.high,
        low=data.low,
        close=data.close,
        text=data.date.apply(lambda row: "{:%Y-%m-%d %H:%M:%S}".format(row)),
        name='Price'
    )
    fig.add_trace(candles, 1, 1)

    longs = create_scatter(data, 'enter_long', '#f6ff08', 'up')
    exit_longs = create_scatter(data, 'exit_long', '#ff79fe', 'down')
    shorts = create_scatter(data, 'enter_short', 'blue', 'down')
    exit_shorts = create_scatter(data, 'exit_short', 'violet', 'up')

    for scatter in [longs, exit_longs, shorts, exit_shorts]:
        if scatter:
            fig.add_trace(scatter, 1, 1)

    # # Add Bollinger Bands
    # fig = plot_area(fig, 1, data, 'bb_lowerband', 'bb_upperband',
    #                 label="Bollinger Band")
    # # prevent bb_lower and bb_upper from plotting
    # try:
    #     del plot_config['main_plot']['bb_lowerband']
    #     del plot_config['main_plot']['bb_upperband']
    # except KeyError:
    #     pass

    if 'stoploss' in data.columns:
        stoploss = go.Scatter(
            #x=data["date"],
            x=list(range(len(data))),
            y=data["stoploss"],
            mode='markers',
            name='stoploss',
            #marker_line_color="midnightblue", marker_color="lightskyblue",
            text=data.apply(lambda row: f"{-abs(1 - row['close'] / row['stoploss']):.2%}", axis=1),
            marker=dict(
                symbol='x',
                size=8,
                line=dict(color='midnightblue', width=1),
                color='lightskyblue'

            )
        )
        fig.add_trace(stoploss, 1, 1)

    if 'reward' in data.columns:
        flags = [False, False, False]
        for i, row in enumerate(dfn):
            if row['reward'] < 0:
                name = 'reward -'
            elif row['reward'] == 0:
                name = 'reward 0'
            else:
                name = 'reward +'
            fig.add_trace(go.Scatter(
                x = [i],
                y = [row['low']],
                mode='text',
                text=round(row['reward'] * 100, 2),
                marker=dict(color="brown", size=6),
                showlegend=(row['reward'] < 0 and not flags[0])
                           or (row['reward'] == 0 and not flags[1])
                           or (row['reward'] > 0 and not flags[2]),
                visible=True if row['reward'] > 0 else 'legendonly',
                name=name,
                legendgroup=name,
            ), 1, 1)
            if row['reward'] < 0:
                flags[0] = True
            elif row['reward'] == 0:
                flags[1] = True
            elif row['reward'] > 0:
                flags[2] = True
            #f'pivotlow_{self.buy_pivot_period.value}': {'plotly': {'mode': 'text', 'marker': dict(color="brown", size=6), 'text': 'L', 'legendgroup': 'pivots', 'showlegend': False}}
    
    if 'label' in data.columns:
        flag = False
        for i, row in enumerate(dfn):
            if not pd.isnull(row['label']):
                fig.add_trace(go.Scatter(
                    x = [i],
                    y = [row['low']],
                    mode='text',
                    text=row['label'],
                    marker=dict(color="brown", size=6),
                    showlegend=not flag,
                    visible='legendonly',
                    name='label',
                    legendgroup='label',
                ), 1, 1)
                flag = True

    # main plot goes to row 1
    fig = add_areas(fig, 1, data, plot_config['main_plot'])
    fig = add_indicators(fig=fig, row=1, indicators=plot_config['main_plot'], data=data)
    fig = plot_trades(fig, data, trades)
    # sub plot: Volume goes to row 2
    volume = go.Bar(
        #x=data['date'],
        x=list(range(len(data))),
        y=data['volume'],
        name='Volume',
        showlegend=False,
        marker_color='DarkSlateGrey',
        marker_line_color='DarkSlateGrey'
    )
    fig.add_trace(volume, 2, 1)
    # add each sub plot to a separate row
    for i, label in enumerate(plot_config['subplots']):
        sub_config = plot_config['subplots'][label]
        row = 3 + i
        fig = add_indicators(fig=fig, row=row, indicators=sub_config,
                             data=data)
        # fill area between indicators ( 'fill_to': 'other_indicator')
        fig = add_areas(fig, row, data, sub_config)
    


    fig.update_traces(xaxis="x1")

    # if config.get('timerange', None):
    #     timerange = config.get('timerange').split('-')
    #     if timerange[1] != '':
    #         t1 = timerange[0][:6] + '-' + timerange[0][6:]
    #         t1 = t1[:4] + '-' + t1[4:]
    #         t2 = timerange[1][:6] + '-' + timerange[1][6:]
    #         t2 = t2[:4] + '-' + t2[4:]
    #         fig.update_xaxes(type="date", range=[t1, t2])

    # if config['timeframe'] != '1d':
    #     for d in data[(data['date'].dt.hour == 0) & (data['date'].dt.minute == 0)]['date']:
    #         fig.add_vline(x=d, line_width=1, line_dash="dash", line_color="#000", opacity=0.35)
    # Dashed vertical line at each day boundary (the x-axis is the candle index, not dates)
    for i, row in enumerate(dfn):
        if i == 0: continue
        if row['date'].strftime('%Y-%m-%d') != dfn[i - 1]['date'].strftime('%Y-%m-%d'):
            fig.add_vline(x=i, line_width=1, line_dash="dash", line_color="#000", opacity=0.35)
    
    # fig.update_layout(legend=dict(
    #     orientation="h",
    #     yanchor="bottom",
    #     y=1.02,
    #     xanchor="right",
    #     x=1
    # ))
    fig.update_layout(
        legend=dict(
            # x=0,
            # y=1,
            traceorder="reversed",
            font=dict(
                size=12,
            ),
        )
    )

    if 'legend_title' in plot_config['main_plot']:
        fig.update_layout(legend_title_text=plot_config['main_plot']['legend_title'])
    #fig.update_layout(legend_title_text='test')

    if 0:  # Debug block (disabled): overlay DCA/martingale position adjustments from a local pickle dump
        with open('/Users/fridary/Downloads/adjust_trade_position.pkl', 'rb') as f:
            martingale = pickle.load(f)

        martingale = pd.DataFrame(martingale)
        martingale = martingale.loc[(martingale['current_time'] >= dfn[0]['date']) & (martingale['current_time'] <= dfn[-1]['date'])]
        if len(martingale):
            print()
            print(martingale)
            print()
            martingale.rename(columns={'current_time': 'date'}, inplace=True)
            ma = pd.merge(martingale, data[['date', 'i']], how='inner', on='date')
            ma['desc'] = ma.apply(
                    lambda row: f"#{row['count_of_entries']}, " +
                    #f"total_profit={round(row['current_profit']*100,2)}%, " +
                    f"profit_from_start={round(row['profit_from_start'],2)}%, " +
                    f"profit_from_last_order={round(row['profit_from_last_order'],2)}%, " +
                    f"stake_amount={round(row['stake_amount'],2)}",
                    axis=1)
            fig.add_trace(go.Scatter(
                x = ma['i'],
                y = ma['current_rate'],
                text=ma['desc'],
                mode='markers',
                marker=dict(color="#e845f9", size=8, line=dict(color="#fff", width=2)),
                showlegend=True,
                visible=True,
                name='martingale',
                legendgroup='martingale',
            ), 1, 1)


    return fig


def generate_profit_graph(pairs: str, data: Dict[str, pd.DataFrame],
                          trades: pd.DataFrame, timeframe: str, stake_currency: str,
                          starting_balance: float) -> go.Figure:
    # Combine close-values for all pairs, rename columns to "pair"
    try:
        df_comb = combine_dataframes_with_mean(data, "close")
    except ValueError:
        raise OperationalException(
            "No data found. Please make sure that data is available for "
            "the timerange and pairs selected.")

    # Trim trades to available OHLCV data
    trades = extract_trades_of_period(df_comb, trades, date_index=True)
    if len(trades) == 0:
        raise OperationalException('No trades found in selected timerange.')

    # Add combined cumulative profit
    df_comb = create_cum_profit(df_comb, trades, 'cum_profit', timeframe)

    # Plot the pairs average close prices, and total profit growth
    avgclose = go.Scatter(
        x=df_comb.index,
        y=df_comb['mean'],
        name='Avg close price',
    )

    fig = make_subplots(rows=6, cols=1, shared_xaxes=True,
                        row_heights=[1, 1, 1, 0.5, 0.75, 0.75],
                        vertical_spacing=0.05,
                        subplot_titles=[
                            "AVG Close Price",
                            "Combined Profit",
                            "Profit per pair",
                            "Parallelism",
                            "Underwater",
                            "Relative Drawdown",
                        ])
    fig['layout'].update(title="Freqtrade Profit plot")
    fig['layout']['yaxis1'].update(title='Price')
    fig['layout']['yaxis2'].update(title=f'Profit {stake_currency}')
    fig['layout']['yaxis3'].update(title=f'Profit {stake_currency}')
    fig['layout']['yaxis4'].update(title='Trade count')
    fig['layout']['yaxis5'].update(title='Underwater Plot')
    fig['layout']['yaxis6'].update(title='Underwater Plot Relative (%)', tickformat=',.2%')
    fig['layout']['xaxis']['rangeslider'].update(visible=False)
    fig.update_layout(modebar_add=["v1hovermode", "toggleSpikeLines"])

    fig.add_trace(avgclose, 1, 1)
    fig = add_profit(fig, 2, df_comb, 'cum_profit', 'Profit')
    fig = add_max_drawdown(fig, 2, trades, df_comb, timeframe, starting_balance)
    fig = add_parallelism(fig, 4, trades, timeframe)
    # Two rows consumed
    fig = add_underwater(fig, 5, trades, starting_balance)

    for pair in pairs:
        profit_col = f'cum_profit_{pair}'
        try:
            df_comb = create_cum_profit(df_comb, trades[trades['pair'] == pair], profit_col,
                                        timeframe)
            fig = add_profit(fig, 3, df_comb, profit_col, f"Profit {pair}")
        except ValueError:
            pass
    return fig


def generate_plot_filename(pair: str, timeframe: str) -> str:
    """
    Generate filenames per pair/timeframe to be used for storing plots
    """
    pair_s = pair_to_filename(pair)
    file_name = 'freqtrade-plot-' + pair_s + '-' + timeframe + '.html'

    logger.info('Generate plot file for %s', pair)

    return file_name


def store_plot_file(fig, filename: str, directory: Path, auto_open: bool = False) -> None:
    """
    Generate a plot html file from pre populated fig plotly object
    :param fig: Plotly Figure to plot
    :param filename: Name to store the file as
    :param directory: Directory to store the file in
    :param auto_open: Automatically open files saved
    :return: None
    """
    directory.mkdir(parents=True, exist_ok=True)

    _filename = directory.joinpath(filename)
    plot(fig, filename=str(_filename),
         auto_open=auto_open, config=dict({'scrollZoom': True, 'responsive': True}))
    logger.info(f"Stored plot as {_filename}")


def load_and_plot_trades(config: Dict[str, Any]):
    """
    From configuration provided
    - Initializes plot-script
    - Get candle (OHLCV) data
    - Generate Dataframes populated with indicators and signals based on configured strategy
    - Load trades executed during the selected period
    - Generate Plotly plot objects
    - Generate plot files
    :return: None
    """
    strategy = StrategyResolver.load_strategy(config)

    exchange = ExchangeResolver.load_exchange(config['exchange']['name'], config)
    IStrategy.dp = DataProvider(config, exchange)
    strategy.ft_bot_start()
    strategy.bot_loop_start()
    plot_elements = init_plotscript(config, list(exchange.markets), strategy.startup_candle_count)
    timerange = plot_elements['timerange']
    trades = plot_elements['trades']
    pair_counter = 0
    for pair, data in plot_elements["ohlcv"].items():
        pair_counter += 1
        logger.info("analyse pair %s", pair)

        df_analyzed = strategy.analyze_ticker(data, {'pair': pair})
        df_analyzed = trim_dataframe(df_analyzed, timerange)
        if not trades.empty:
            trades_pair = trades.loc[trades['pair'] == pair]
            trades_pair = extract_trades_of_period(df_analyzed, trades_pair)
        else:
            trades_pair = trades

        fig = generate_candlestick_graph(
            pair=pair,
            data=df_analyzed,
            trades=trades_pair,
            indicators1=config.get('indicators1', []),
            indicators2=config.get('indicators2', []),
            plot_config=strategy.plot_config if hasattr(strategy, 'plot_config') else {}
        )

        store_plot_file(fig, filename=generate_plot_filename(pair, config['timeframe']),
                        directory=config['user_data_dir'] / 'plot', auto_open=config.get('plot_auto_open', False))

    logger.info('End of plotting process. %s plots generated', pair_counter)


def plot_profit(config: Dict[str, Any]) -> None:
    """
    Plots the total profit for all pairs.
    Note: the profit calculation isn't realistic, but it should be roughly
    proportional, and is therefore useful for finding a good algorithm.
    """
    if 'timeframe' not in config:
        raise OperationalException('Timeframe must be set in either config or via --timeframe.')

    exchange = ExchangeResolver.load_exchange(config['exchange']['name'], config)
    plot_elements = init_plotscript(config, list(exchange.markets))
    trades = plot_elements['trades']
    # Filter trades to relevant pairs
    # Remove open pairs - we don't know the profit yet so can't calculate profit for these.
    # Also, If only one open pair is left, then the profit-generation would fail.
    trades = trades[(trades['pair'].isin(plot_elements['pairs']))
                    & (~trades['close_date'].isnull())
                    ]
    if len(trades) == 0:
        raise OperationalException("No trades found, cannot generate Profit-plot without "
                                   "trades from either Backtest result or database.")

    # Create an average close price of all the pairs that were involved.
    # this could be useful to gauge the overall market trend
    fig = generate_profit_graph(plot_elements['pairs'], plot_elements['ohlcv'],
                                trades, config['timeframe'],
                                config.get('stake_currency', ''),
                                config.get('available_capital', config['dry_run_wallet']))
    store_plot_file(fig, filename='freqtrade-profit-plot.html',
                    directory=config['user_data_dir'] / 'plot',
                    auto_open=config.get('plot_auto_open', False))


Now run `python -m freqtrade plot-dataframe -c 'config.json' -d '/Downloads/data_crypto' -s POC --timerange 20221122-20221123 --timeframe 15m -p DOGE/USDT --no-trades` and you get:
[Screenshot 2022-12-18 11:27:49]

The chart shows clusters, the POC (point of control), VAH (value area high), VAL (value area low), Delta (ask volume minus bid volume), and Time (how many seconds the candle took to complete). Clusters can be detected automatically or set to fixed values.
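
For illustration, here is a minimal sketch of how POC/VAH/VAL can be computed from raw trades. This is my own simplified version, not the exact code used above; it assumes a DataFrame with `price` and `amount` columns, e.g. from ccxt `fetch_trades`:

```python
# Minimal volume-profile sketch (simplified; not the exact implementation above).
import pandas as pd


def volume_profile(trades: pd.DataFrame, bin_size: float, value_area: float = 0.70):
    """Return (POC, VAH, VAL) for one candle's trades."""
    # Cluster trades into fixed-size price bins
    bins = (trades['price'] // bin_size) * bin_size
    profile = trades.groupby(bins)['amount'].sum().sort_index()
    prices = list(profile.index)
    # POC: the price bin with the largest traded volume
    i = j = prices.index(profile.idxmax())
    covered = profile.iloc[i]
    total = profile.sum()
    # Expand around the POC until `value_area` of the total volume is covered
    while covered < value_area * total:
        up = profile.iloc[j + 1] if j + 1 < len(prices) else -1.0
        down = profile.iloc[i - 1] if i > 0 else -1.0
        if up >= down:
            j += 1
            covered += up
        else:
            i -= 1
            covered += down
    return profile.idxmax(), prices[j], prices[i]  # POC, VAH, VAL


trades = pd.DataFrame({
    'price':  [100.0, 100.5, 100.5, 101.0, 101.5, 102.0],
    'amount': [1.0, 5.0, 4.0, 2.0, 1.0, 0.5],
})
poc, vah, val = volume_profile(trades, bin_size=0.5)
print(poc, vah, val)  # 100.5 101.0 100.5
```

In a strategy this would run once per candle (grouping trades by candle open time) to populate `poc`/`vah`/`val` columns for plotting.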

I highly recommend this channel: https://www.youtube.com/@ATAS_EN/videos; the authors explain many smart things about market profiling and volume-analysis strategies.
For example, "Basic Trading Signals of the Dynamic Levels Channel Indicator": https://www.youtube.com/watch?v=CwRr1iBROTQ

@TheJoeSchr
Contributor

Wow thanks for sharing

@HouJun11220033

Wow thanks for sharing

@ForrestDevs

Amazing, thank you @fridary! Have you seen TheMas7er on YouTube? He demonstrates an interesting concept that is largely based on statistics; maybe something there could be useful for building a statistics model for freqtrade.

@BlackMamba911

Thanks for sharing. Could you please also give the structure of the database and tables that you use with ClickHouse?

Regards

@fridary
Author

fridary commented Mar 14, 2023

@BlackMamba911
Here is a script that fetches real-time tick data via the REST API. It still needs some fixes and I made it for fun; for production it's better to use websockets.
On startup it creates the ClickHouse tables.
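
As a rough sketch of the table structure, a raw-trades table in ClickHouse could look like the following. This is a hypothetical schema for illustration only (the table name and columns are my assumptions); the actual tables are created by the script below:

```python
# Hypothetical minimal ClickHouse schema for raw trades -- an assumption for
# illustration, not necessarily the exact tables the script below creates.
def trades_table_ddl(table: str = 'trades') -> str:
    return f"""
    CREATE TABLE IF NOT EXISTS {table} (
        symbol   LowCardinality(String),
        ts       DateTime64(3, 'UTC'),
        price    Float64,
        amount   Float64,
        side     Enum8('buy' = 1, 'sell' = 2),
        trade_id UInt64
    )
    ENGINE = MergeTree
    ORDER BY (symbol, ts)
    """


# With clickhouse_driver, this would be executed as:
#   Client('localhost').execute(trades_table_ddl())
print(trades_table_ddl())
```

Ordering by `(symbol, ts)` keeps per-pair time-range scans (the access pattern for building candle clusters) fast.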

import asyncio
import random
import time
from datetime import datetime, timedelta
from dateutil.relativedelta import relativedelta
import ccxt.async_support as ccxt
from pprint import pprint
import pandas as pd
from clickhouse_driver import Client

#proxy = ['http://user:login@ip:port']
proxy = [False]
ua = ['Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36','Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0','Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36','Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0','Mozilla/5.0 (Windows NT 10.0; rv:91.0) Gecko/20100101 Firefox/91.0','Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.1 Safari/605.1.15','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36','Mozilla/5.0 (X11; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0','Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36','Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36','Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:94.0) Gecko/20100101 Firefox/94.0','Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36','Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36','Mozilla/5.0 (X11; Linux x86_64; rv:95.0) Gecko/20100101 Firefox/95.0','Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:95.0) Gecko/20100101 Firefox/95.0','Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:95.0) Gecko/20100101 Firefox/95.0','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.34','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) 
Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.43','Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36','Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.0 Safari/605.1.15','Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36','Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36','Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 OPR/81.0.4196.60','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36 Edg/96.0.1054.29','Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.2 Safari/605.1.15','Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36 Edg/96.0.1054.53','Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Firefox/91.0','Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36','Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.41','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36 Edg/96.0.1054.57','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.53','Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) 
Chrome/96.0.4664.45 Safari/537.36','Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 OPR/81.0.4196.61','Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36','Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.1 Safari/605.1.15','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36 OPR/82.0.4227.23','Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:96.0) Gecko/20100101 Firefox/96.0','Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.2 Safari/605.1.15','Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.45 Safari/537.36','Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36','Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:93.0) Gecko/20100101 Firefox/93.0','Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36','Mozilla/5.0 (Windows NT 10.0; rv:78.0) Gecko/20100101 Firefox/78.0','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36','Mozilla/5.0 (X11; CrOS x86_64 14150.87.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.124 Safari/537.36','Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0','Mozilla/5.0 (X11; Linux x86_64; rv:93.0) Gecko/20100101 Firefox/93.0','Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1.1 Safari/605.1.15','Mozilla/5.0 (Windows NT 10.0; Win64; x64) 
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36','Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.81 Safari/537.36 OPR/80.0.4170.63']



async def fetch_tickers():
    # gather() schedules coroutines itself; no need for ensure_future here
    return await asyncio.gather(*(safe_trader(m, i) for i, m in enumerate(mass)))

async def safe_trader(mass, _id):
    async with sem:
        global client, db_name, boost

        exchange_id = mass['exchange']
        exchange = getattr(ccxt, exchange_id.replace('_futures', ''))({
            'verbose': False,
            'enableRateLimit': False,
            'rateLimit': 2 * 1000,
            'timeout': 8 * 1000,
        })
        if exchange_id.endswith('_futures'):
            exchange.options['defaultType'] = 'future'
        if not exchange.has['fetchTrades']:
            exit('no fetchTrades')
        if exchange.id in ['binance', 'binance_futures']:
            #exchange.options['fetchTradesMethod'] = 'publicGetTrades'
            exchange.options['fetchTradesMethod'] = 'publicGetAggTrades'

        # if _id == 0:
        #     pprint(await exchange.loadMarkets ())

        # Per-exchange fetch_trades limits (max allowed unless noted otherwise)
        limits = {
            'binance': 1000,
            'lbank2': 600,
            'ftx': 5000,
            'kraken': None,
            'hitbtc': 1000,
            'bitfinex2': 5000,
            'bitmex': 1000,
            'poloniex': 1000,
            'upbit': None,  # 200 limit
            'coinbasepro': 1000,  # 1000 default from api, 100 from ccxt comments
            'cex': 1000,
            'liquid': 1000,
            'gemini': 50,  # default 50, max allowed 50
            'okcoin': 100,
            'bitmart': 50,
            'aax': 2000,
            'bkex': 50,  # 51 max allowed
            'mexc': 500,  # 621 max allowed
            'digifinex': 500,  # 500 max per docs
            'whitebit': 100,
            'zb': 50,
            'kucoin': 100,
            'phemex': 1000,
            'bigone': 200,
            'bitget': 100,
            'cryptocom': 200,
            'bittrex': 100,
            'woo': 500,
            'btcex': 5000,
            'bitforex': 600,
            'bibox': 200,
            'bytetrade': 100,
            'cdax': 2000,
            'coinex': 1000,
            'latoken': 1000,
            # 'bitrue': 1000,  # ~500 max allowed
        }
        limit = limits.get(exchange.id, 1000)

        proxy = mass['proxy']
        if proxy != False:
            exchange.aiohttp_proxy = proxy
        exchange.userAgent = random.choice(ua)

        proxy_err_count = 0
        while 1:
            take = None
            for i, b in enumerate(boost):
                if exchange_id == b['exchange']:
                    if (not b['last_update'] or b['next_update'] <= datetime.now()) and not b['processing']:
                        boost[i]['processing'] = True
                        take = b
                        break

            if not take:
                await asyncio.sleep(random.randint(20, 30) / 10)
                continue

            # Per-exchange symbol quirks. Note: the original adjusted _symbol
            # inside the limit chain above, before _symbol was defined, which
            # raised a NameError for kraken and bitmex.
            _symbol = take['symbol']
            if exchange.id in ['kraken', 'gemini']:
                _symbol = _symbol.replace('USDT', 'USD')
            if exchange.id == 'bitmex':
                _symbol += ':USDT'

            try:
                trades = await exchange.fetch_trades(_symbol, None, limit)
            except ccxt.BaseError as e:
                if type(e).__name__ == 'BadSymbol':
                    print(f"#{_id} BadSymbol symbol={_symbol} exchange={mass['exchange']}", type(e).__name__, str(e)[:200])
                    await asyncio.sleep(25)
                    boost[take['i']]['processing'] = False
                    continue
                boost[take['i']]['processing'] = False

                print(f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S')} #{_id} ERR exchange={mass['exchange']} symbol={_symbol} proxy={proxy}", type(e).__name__, str(e)[:150])
                proxy_err_count += 1
                if proxy_err_count == 4:
                    print(f" REMOVED proxy={proxy}", type(e).__name__, str(e)[:200])
                    break
                await asyncio.sleep(45)
                continue

            if not trades:
                await asyncio.sleep(2)
                continue

            proxy_err_count = 0
            df = pd.DataFrame(trades)
            df['datetime'] = pd.to_datetime(df['datetime'])
            df.index = df['datetime']
            df = df[['datetime', 'id', 'timestamp', 'amount', 'price', 'side', 'type', 'takerOrMaker']]
            df.rename(columns={'datetime': 'date'}, inplace=True)
            df.sort_index(inplace=True)

            # These exchanges return no usable numeric trade id, so synthesize
            # a pseudo-id from the timestamp plus the amount's digits
            if exchange.id in ['bitmart', 'kucoin', 'woo', 'bibox']:
                df['id'] = df.apply(lambda row: int(str(str(row['timestamp']) + str(row['amount']).replace('.', ''))[-15:].split('e')[0]), axis=1)
            # These exchanges return string ids; convert to a stable integer
            if exchange.id in ['aax', 'bkex', 'bittrex', 'bytetrade', 'cdax', 'latoken']:
                df['id'] = df.apply(lambda row: int(str(int.from_bytes(row['id'].encode(), 'little'))[:18]), axis=1)

            # .seconds wraps at one day; total_seconds() is the correct span
            next_update = (df.index[-1] - df.index[0]).total_seconds() / 13
            next_update = min(next_update, 60)
            if exchange.id in ['gemini', 'bitmart', 'bkex', 'zb', 'kucoin', 'cryptocom']: next_update = min(next_update, 8)
            if exchange.id in ['upbit', 'okcoin', 'whitebit', 'bigone', 'bitget', 'woo', 'digifinex', 'bibox', 'bytetrade']: next_update = min(next_update, 18)
            boost[take['i']]['last_update'] = datetime.now()
            boost[take['i']]['next_update'] = datetime.now() + timedelta(seconds=next_update)
            boost[take['i']]['processing'] = False

            ids = client.execute(f"SELECT id from {db_name}.{take['exchange']}_{take['symbol'].replace('/', '_')} \
                ORDER BY id DESC LIMIT {limit if limit else 1000}")
            if ids and len(ids):
                ids = list(list(zip(*ids))[0])
                df['id'] = df['id'].astype(int)
                df = df[~df['id'].isin(ids)]

            x = client.insert_dataframe(f"INSERT INTO {db_name}.{take['exchange']}_{take['symbol'].replace('/', '_')} VALUES", df)
            await asyncio.sleep(2)
            


        await exchange.close()




if __name__ == '__main__':


    symbols = ['BTC/USDT', 'ETH/USDT', 'SOL/USDT', 'AAVE/USDT', 'XRP/USDT']

    db_name = 'trades'
    boost = {
        'upbit': set(symbols),
        'coinbasepro': set(symbols),
        # 'cex': symbols,
        # 'liquid': set(symbols) - set(['MATIC/USDT']),
        # 'gemini': symbols,
        # 'okcoin': set(symbols) - set(['MATIC/USDT', 'XRP/USDT', 'LINK/USDT']),
        # 'bitmart': symbols + ['NEAR/USDT'],
        # 'aax': symbols + ['NEAR/USDT'],
        # 'bkex': symbols + ['NEAR/USDT'],
        # 'mexc': symbols + ['NEAR/USDT'],
        # 'whitebit': symbols + ['NEAR/USDT'],
        # 'zb': symbols,
        # 'kucoin': symbols,
        # 'phemex': symbols + ['NEAR/USDT'],
        # 'bigone': symbols,
        # 'bitget': symbols + ['NEAR/USDT'],
        # 'cryptocom': symbols + ['NEAR/USDT'],
        # 'bittrex': symbols,
        # 'woo': symbols + ['NEAR/USDT'],
        # 'btcex': set(symbols) - set(['MATIC/USDT']),
        # 'bitforex': set(symbols) - set(['MATIC/USDT', 'LINK/USDT']),
        # 'digifinex': set(symbols) - set(['MATIC/USDT']),
        # 'bibox': symbols + ['NEAR/USDT'],
        # 'bytetrade': set(symbols) - set(['MATIC/USDT']),
        # 'cdax': set(symbols) - set(['MATIC/USDT']),
        # 'coinex': symbols + ['NEAR/USDT'],
        # 'latoken': symbols + ['NEAR/USDT'],
    }


    exchanges = boost.keys()
    _boost = []
    i = 0
    for k, b in boost.items():
        for s in list(b):
            _boost.append({'exchange': k, 'symbol': s, 'last_update': None, 'next_update': None, 'processing': False, 'i': i})
            i += 1
    boost = _boost.copy()
    mass = []
    for ex in exchanges:
        for p in proxy:
            mass.append({'proxy': p, 'exchange': ex})
        

    client = Client(host='localhost', user='default', password='password', settings={'use_numpy': True})
    client.execute(f'CREATE DATABASE IF NOT EXISTS {db_name}')
    for b in boost:
        client.execute(f"CREATE TABLE IF NOT EXISTS {db_name}.{b['exchange']}_{b['symbol'].replace('/', '_')} \
            ( \
                date DateTime, \
                id UInt64, \
                timestamp UInt64, \
                amount Float32, \
                price Float32, \
                side String, \
                type String, \
                takerOrMaker String \
            ) ENGINE = MergeTree() ORDER BY (id, timestamp) PRIMARY KEY id")
    #print(client.execute('SHOW DATABASES'))
    #print(client.execute(f'SHOW TABLES from {db_name}'))


    sem = asyncio.Semaphore(len(mass) * 1)
    # get_event_loop() outside a running loop is deprecated; asyncio.run
    # creates and closes the loop for us
    asyncio.run(fetch_tickers())
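With trades stored like this, a basic volume profile (traded volume per price bucket) can be computed with pandas. A minimal sketch, not part of the script above; the `bucket` width and the toy data are my own assumptions, and the DataFrame only needs the `price`/`amount` columns that `fetch_trades` produces:

```python
import pandas as pd

def volume_profile(trades: pd.DataFrame, bucket: float = 1.0) -> pd.Series:
    """Aggregate traded volume into price buckets of width `bucket`.

    `trades` needs `price` and `amount` columns, as returned by
    ccxt's fetch_trades. Returns volume indexed by bucket floor price.
    """
    level = (trades['price'] // bucket) * bucket
    return trades.groupby(level)['amount'].sum().sort_index()

# Toy example: three trades across two one-dollar price levels
trades = pd.DataFrame({
    'price': [37744.2, 37744.9, 37745.5],
    'amount': [0.5, 0.25, 0.2],
})
profile = volume_profile(trades, bucket=1.0)
poc = profile.idxmax()  # point of control: the level with the most volume
```

The same groupby works unchanged on a full day of trades pulled back out of ClickHouse.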

@TheJoeSchr
Contributor

@fridary I'm currently working on bringing this into freqtrade. You might be interested in my branch over at https://github.com/TheJoeSchr/freqtrade/tree/feature/fetch-public-trades

Currently it's just a proof of concept; a lot more polishing and optimization is needed before it's production-ready.

I couldn't use anything from what you shared, but knowing it's possible made me start. Anyway, you might still be interested or have some ideas. If you want to discuss this further, we all hang out in the freqtrade Discord's #core-dev channel: https://www.freqtrade.io/en/stable/#help-discord

@Jarrodsz

Did this ever get into freqtrade, or reach a more solid form to test?

Inspirational efforts!

@TheJoeSchr
Contributor

Did this ever get into freqtrade, or reach a more solid form to test?

Inspirational efforts!

Not yet. I've just started a dry run of the bot, only working with Binance data so far, but it seems to work quite well if you want to check out my branch linked above.

@xmatthias linked a pull request Aug 19, 2023 that will close this issue
@TonyChen-SH

That's so cool.

@tsj83

tsj83 commented Oct 9, 2023

Trades-data is not really necessary to calculate a volume profile - as also shown in #5795 .

That's not right. Yes, you can load 1-minute candles and estimate a price level per candle by volume, but you will get noisy, dirty level values. Really valuable information comes only from reading trade prints / order flow: clustering, footprints, deltas, balances, and many other tools.
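As an illustration, the per-candle delta (aggressive buy volume minus aggressive sell volume) can be derived directly from trade prints. A minimal sketch, assuming ccxt-style trades with `timestamp` (ms), `side`, and `amount` fields; the 15-minute bucketing and sample data are my own choices:

```python
import pandas as pd

def candle_deltas(trades: pd.DataFrame, timeframe: str = '15min') -> pd.Series:
    """Delta per candle: taker-buy volume minus taker-sell volume."""
    df = trades.copy()
    df['dt'] = pd.to_datetime(df['timestamp'], unit='ms')
    # sign each trade by taker side, then sum per candle bucket
    signed = df['amount'].where(df['side'] == 'buy', -df['amount'])
    return signed.groupby(df['dt'].dt.floor(timeframe)).sum()

# Three trades: two in the first 15m candle, one in the next
trades = pd.DataFrame({
    'timestamp': [1652688000000, 1652688060000, 1652688900000],
    'side': ['buy', 'sell', 'buy'],
    'amount': [1.0, 0.4, 2.0],
})
deltas = candle_deltas(trades)
```

A strongly positive delta on a down candle (or vice versa) is one of the absorption signals order-flow traders look for.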

Here is a footprint cluster chart example with bid-ask distribution (screenshot, 2022-05-18). There are a lot of instruments and trading strategies based purely on reading what is inside a candle.

This is how I store footprints (15m BTC/USDT example):

'2022-05-01 20:00:00': {
    'high': 37847.3,
    'low': 37682.45,
    'open': 37805.06,
    'close': 37818.06,
    'volume': 185.39344356999996,
    'trades': 260,
    'footprint': {
        37682: {'ask_qty': 0,
                'ask_trades': 0,
                'bid_qty': 0.0025,
                'bid_trades': 1},
        # ...
        37743: {'ask_qty': 0.008,
                'ask_trades': 3,
                'bid_qty': 0.008,
                'bid_trades': 3},
        37744: {'ask_qty': 0.008,
                'ask_trades': 3,
                'bid_qty': 11.7304, # huge bid volume at 37744 level
                'bid_trades': 6},
        37745: {'ask_qty': 0.006,
                'ask_trades': 2,
                'bid_qty': 0.006,
                'bid_trades': 2},
        # ...
        37847: {'ask_qty': 0,
                'ask_trades': 0,
                'bid_qty': 1.052,
                'bid_trades': 1}
    },
},
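A structure like the one above can be aggregated from raw trades. A minimal sketch (my own, not fridary's code), assuming ccxt-style trade dicts with `price`, `amount`, and taker `side`, and using the common convention that taker buys hit the ask and taker sells hit the bid; the whole-dollar `tick` is an assumption:

```python
from collections import defaultdict

def build_footprint(trades, tick=1.0):
    """Aggregate trades into per-price-level bid/ask quantity and trade counts.

    Convention (an assumption): side == 'buy' (taker buy) counts as ask
    volume, side == 'sell' (taker sell) counts as bid volume.
    """
    footprint = defaultdict(lambda: {'ask_qty': 0.0, 'ask_trades': 0,
                                     'bid_qty': 0.0, 'bid_trades': 0})
    for t in trades:
        level = int(t['price'] // tick * tick)  # bucket price to tick floor
        key = 'ask' if t['side'] == 'buy' else 'bid'
        footprint[level][key + '_qty'] += t['amount']
        footprint[level][key + '_trades'] += 1
    return dict(footprint)

fp = build_footprint([
    {'price': 37744.5, 'amount': 0.25, 'side': 'sell'},
    {'price': 37744.9, 'amount': 0.5, 'side': 'sell'},
    {'price': 37745.1, 'amount': 0.006, 'side': 'buy'},
])
```

Running this per candle (grouping trades by candle open time first) yields exactly the nested per-candle layout shown above.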

Anyway, freqtrade is a magic tool with your development and help!

@fridary curious: how did you fetch this data? I have ATAS and I am looking to do exactly that. I've been digging through the documentation for days. Could you please share the code? :)

@fridary
Author

fridary commented Oct 9, 2023

@thiago-springer-jota I shared a lot of code above.

@tsj83

tsj83 commented Oct 9, 2023

'footprint': {
37682: {'ask_qty': 0,
'ask_trades': 0,
'bid_qty': 0.0025,
'bid_trades': 1},
# ...

ATAS is written in C# and the code above is Python. The only way to get data out of ATAS is with C#. That bit in particular is of great interest: none of the code above produces "bid_trades". If you can share it, that would be amazing. Thanks!

@Vain10

Vain10 commented Nov 13, 2023

I made my own volume profile with ChatGPT to define the POC, VAH & VAL. I just use it on the 4h timeframe to define major support and resistance. It was great in backtesting, but apparently there isn't enough data on BingX to calculate it in live trading. Feel free to make it better and resolve its bugs!

    # Assumes `import numpy as np` and `from pandas import DataFrame` in the
    # strategy file. Note: the original returned (and re-sorted) inside the
    # loop, so only the first window was ever computed; the return now
    # happens once, after the loop.
    def volume_profile(self, dataframe: DataFrame, num_candles: int) -> DataFrame:
        # Initialize new columns
        dataframe['POC'] = np.nan
        dataframe['VAH'] = np.nan
        dataframe['VAL'] = np.nan

        for i in range(num_candles - 1, len(dataframe)):
            # Slice the DataFrame to only include the last 'num_candles' rows
            sliced_df = dataframe.iloc[i - num_candles + 1:i + 1].copy()

            # Approximate per-candle volume profile (volume weighted by price)
            sliced_df['VolumeProfile'] = sliced_df['volume'] / sliced_df['close']

            # Point of Control: close of the candle with the largest profile value
            poc_price = sliced_df.loc[sliced_df['VolumeProfile'].idxmax()]['close']

            # Value Area: candles covering 70% of the window's volume
            total_volume = sliced_df['VolumeProfile'].sum()
            sorted_df = sliced_df.sort_values(by='VolumeProfile', ascending=False)
            sorted_df['CumulativeVolume'] = sorted_df['VolumeProfile'].cumsum()
            va_df = sorted_df[sorted_df['CumulativeVolume'] <= total_volume * 0.7]

            # Value Area High and Low
            vah_price = va_df['close'].max()
            val_price = va_df['close'].min()

            # Assign values to the current candle
            dataframe.loc[dataframe.index[i], 'POC'] = poc_price
            dataframe.loc[dataframe.index[i], 'VAH'] = vah_price
            dataframe.loc[dataframe.index[i], 'VAL'] = val_price

        return dataframe

@perkmeister

perkmeister commented Nov 18, 2023

This line means it has lookahead bias, because the slice includes the current candle `i`:
sliced_df = dataframe.iloc[i-num_candles+1:i+1].copy()
