Machine Learning Trading Strategies


Machine Learning for Trading

Artificial intelligence (AI) and machine learning (ML) are quietly revolutionizing nearly every area of our lives. Did you know that the latest trading algorithms make extensive use of these technologies? You may be surprised to learn that machine learning hedge funds already significantly outperform generalized hedge funds, as well as traditional quant funds, according to a ValueWalk report. ML and AI systems can be incredibly useful tools for humans navigating the decision-making involved in investments and risk assessment. The influence of human emotions on trading decisions is often the biggest barrier to outperformance. Algorithms and computers make decisions and execute trades faster than any human can, and they do so free from the influence of emotions.

There are many different types of algorithmic trading. A few examples:

Trade execution algorithms, which break trades up into smaller orders to minimize the impact on the stock price. An example is the volume-weighted average price (VWAP) execution strategy.

Strategy-implementation algorithms, which trade on signals from real-time market data. Examples include trend-based strategies built on moving averages, channel breakouts, price-level movements and other technical indicators.

Stealth/gaming algorithms, which aim to detect and exploit price movements caused by large trades and/or other algorithmic strategies.

Arbitrage opportunities, for example where a stock may trade in two separate markets at two different prices; the price difference can be captured by selling the higher-priced stock and buying the lower-priced one.

When algorithmic trading strategies were first introduced, they were wildly profitable and quickly gained market share. In May 2017, capital-markets research firm TABB Group said that high-frequency trading (HFT) accounted for 52% of average daily trading volume. But as competition increased, profits declined. In this increasingly difficult environment, traders need a new tool to give them a competitive edge and boost profits. The good news is that the tool is here now: machine learning.

Machine learning involves feeding an algorithm data samples, usually derived from historical prices. The data samples consist of variables called predictors, as well as a target variable, which is the expected outcome. The algorithm learns to use the predictor variables to predict the target variable.

Machine learning offers several important advantages over traditional algorithmic programs. It can accelerate the search for effective algorithmic trading strategies by automating what is often a tedious, manual process. It also increases the number of markets an individual can monitor and respond to. Most importantly, it offers the ability to move from finding associations based on historical data to identifying and adapting to trends as they develop. If you can automate a process that someone else performs manually, you have a competitive advantage. If you can increase the number of markets you are in, you have more opportunities. And in the zero-sum world of trading, if you can adapt to changes in real time while others stand still, your edge will translate into profits.

Many techniques are used to apply machine learning to trading algorithms, including linear regression, neural networks, deep learning, support vector machines and naive Bayes, to name a few. Well-known funds such as Citadel, Renaissance Technologies, Bridgewater Associates and Two Sigma Investments pursue machine learning strategies as part of their investment approach. At Sigmoidal, we have the expertise and know-how to help traders integrate ML into their own trading strategies.

Our case study

In one of our projects, we designed an intelligent asset-allocation system that used deep learning and Modern Portfolio Theory. The task was to implement an investment strategy that could adapt to rapid changes in the market environment.
The AI model was responsible for predicting future asset returns based on historical data. This was achieved by implementing Long Short-Term Memory (LSTM) units, a sophisticated generalization of the recurrent neural network. This particular architecture can store information across multiple time steps, which is made possible by its memory cell. That property enables the model to learn long and complex temporal patterns in the data. As a result, we were able to predict the future returns of an asset, as well as the uncertainty of our estimates, using a novel technique called variational dropout.

To strengthen our predictions, we fed a wealth of market data, such as currencies, indices and so on, into the model, in addition to the historical returns of the relevant assets. This resulted in more than 400 features used to make the final predictions. Naturally, many of these features were correlated. This problem was mitigated with Principal Component Analysis (PCA), which reduces the dimensionality of the problem and decorrelates the features.

We then used the return and risk (uncertainty) forecasts for all assets as inputs to a mean-variance optimization algorithm, which uses a quadratic solver to minimize risk for a given return. This method determines an asset allocation that is diversified and guarantees the lowest possible level of risk given the return forecasts. The combination of these models produced an investment strategy that generated an 8% annualized return, 23% more than any other benchmark strategy tested over a two-year period. Contact us to learn more.

AI strategies outperform

Performance data for AI strategies is hard to find given their proprietary nature, but hedge fund research firm Eurekahedge has published some informative data. The chart below shows the performance of the Eurekahedge AI/Machine Learning Hedge Fund Index against traditional quant and hedge funds from 2010 to 2016. The index tracks 23 funds in total, of which 12 are still live. Eurekahedge notes that: "Machine learning/AI hedge funds have outperformed both traditional quants and the average hedge fund since 2010, delivering annualized returns of 8.44% over this period compared with 2.62%, 1.62% and 4.27% for CTAs, trend followers and the average global hedge fund respectively."

Eurekahedge also provides the following table with the key takeaways. Table 1: Performance in numbers - AI/Machine Learning Hedge Fund Index vs. quants and traditional hedge funds.

Machine learning/AI hedge funds have outperformed the average global hedge fund in every year except 2012. With the exception of 2011 and 2014, machine learning/AI hedge fund returns have also exceeded those of traditional CTA/managed-futures strategies, while systematic trend-following strategies outperformed them only in 2014, when the latter posted strong gains from short energy futures. Over the five-, three- and two-year annualized periods, machine learning/AI hedge funds outperformed both traditional quants and the average global hedge fund, with annualized gains of 7.35%, 9.57% and 10.56% respectively over these periods. Machine learning/AI hedge funds have also posted better risk-adjusted returns over the last two- and three-year periods compared with all the peers shown in the table below, with Sharpe ratios of 1.51 and 1.53 respectively. And while returns have been more volatile than the average hedge fund (as measured by the Eurekahedge Hedge Fund Index), machine learning/AI funds have posted considerably lower annualized volatility than systematic trend-following strategies.
Eurekahedge also notes that machine learning/AI hedge funds "are negatively correlated to the average hedge fund (-0.267)" and have "a marginally positive correlation to CTA/managed futures and trend-following strategies", which points to the potential diversification benefits of an AI strategy.

The data above illustrate the potential of using AI and machine learning in trading strategies. Fortunately, traders are still in the early stages of incorporating this powerful tool into their trading strategies, which means the opportunity remains relatively untapped and the potential is large. Here is an example of AI applied in practice.

The Arizona Financial Text System (AZFinText)

Imagine a system that could monitor stock prices in real time and predict stock price movements based on the news stream. That is exactly what AZFinText does. This article reproduces an experiment that used a support vector machine (SVM) to trade the S&P 500 and achieved excellent results. The table below shows how it performed relative to the top 10 quantitative mutual funds in the world.

A strategy using Google Trends

Another experimental trading strategy used Google Trends as a variable. There is a large number of articles on using Google Trends as an indicator of market sentiment. The experiment in this paper tracked changes in search volume for a set of 98 search terms (some of them related to the stock market). The term "debt" turned out to be the strongest, most reliable indicator for predicting price moves in the DJIA. Below is a cumulative performance chart. The red line shows a "buy and hold" strategy. The Google Trends strategy (blue line) profited substantially, with a 326% return.

Can I learn ML myself?

Applying machine learning to trading is a broad and complex topic that takes time to master. But if you are interested, we recommend a few resources as a starting point. Once you are familiar with these materials, there is also a popular Udacity course on applying machine learning fundamentals to market trading. If you want to speed up your learning even further, you can hire a consultant. Make sure you ask hard questions before starting the project. Or, you can schedule a short call with us to explore what can be done.

I need more specific examples that apply to my sector.

By integrating machine learning into your trading strategies, your portfolio can capture more alpha. But implementing a successful ML investment strategy is difficult - you will need extraordinarily talented people with experience in trading and data science to get there. Let us help you get started. Looking for machine learning consulting?
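To make the allocation step of the case study above a little more concrete, here is a minimal, generic sketch of mean-variance optimization (minimizing risk subject to a required return). The numbers are made-up placeholders and this is a textbook formulation, not Sigmoidal's actual solver.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs: predicted annual returns and a covariance matrix
# for four assets (placeholder numbers, not real forecasts).
mu = np.array([0.06, 0.08, 0.03, 0.10])           # expected returns
cov = np.array([[0.04, 0.01, 0.00, 0.02],
                [0.01, 0.09, 0.01, 0.03],
                [0.00, 0.01, 0.01, 0.00],
                [0.02, 0.03, 0.00, 0.16]])         # return covariance
target = 0.07                                      # required portfolio return

def portfolio_variance(w):
    return w @ cov @ w                             # risk to be minimized

constraints = [
    {"type": "eq",   "fun": lambda w: w.sum() - 1.0},     # fully invested
    {"type": "ineq", "fun": lambda w: w @ mu - target},    # return >= target
]
bounds = [(0.0, 1.0)] * len(mu)                    # long-only weights
w0 = np.full(len(mu), 1.0 / len(mu))               # start from equal weights

result = minimize(portfolio_variance, w0, bounds=bounds,
                  constraints=constraints, method="SLSQP")
print("weights:", result.x.round(3))
print("expected return:", float(result.x @ mu))
```

The same optimizer can simply be rerun whenever the model produces fresh return and risk forecasts, which is what makes such an allocation adaptive.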

Machine Learning Trading Strategy

Thomas Wiecki mentioned this a few years ago (he deleted the spaces, so search for "ApplyingDeepLearningToEnhanceMomentumTradingStrategiesInStocks") in a thread on trading ideas. Takeuchi, L., Lee, Y. (2013). Applying Deep Learning to Enhance Momentum Trading Strategies in Stocks. "We use an autoencoder composed of stacked restricted Boltzmann machines to extract features from the history of individual stock prices. Our model is able to discover an enhanced version of the momentum effect in stocks without extensive hand-engineering of input features and achieves an annualized return of 45.93% over the 1990-2009 test period versus 10.53% for basic momentum." Could anyone with a head for data science create a Q version of this?

Absolutely fascinating. It is quite impossible for me to comment at this stage, but I have a large number of questions I will need to seek answers to. For me the most interesting sentence is the following: "Our model is not just rediscovering known patterns in stock prices, but goes beyond what humans have been able to achieve." Is it really possible for deep learning to take a simple set of returns and improve on the "predictions" made by applying a simple momentum strategy? These papers seem to suggest that is the case. After reading the paper two or three times I am still not entirely clear what the "stack" actually is, but no doubt I will eventually stumble on some sort of conclusion. Fortunately, this paper comes at a time when I had decided to retire from the incredibly dull research I have done so far. I have decided to "learn" AI and deep learning. Or at least to try. I am far from convinced it will have any application to long-term prediction of stock prices, but this article seems to suggest otherwise. I look forward to finding out whether this research has indeed discovered El Dorado, or whether other factors are at play that will make this line of research as fruitless as most others in the financial markets.

Training a deep neural network on Quantopian data would be difficult unless you can run the notebooks/algorithms on hardware with a powerful GPU attached. If you currently have access to the relevant trading data, you could train a net from that on non-Quantopian machines and then translate the resulting net to scipy for execution in the Quantopian framework.

Very interesting to read some of the other papers from Stanford on deep learning applied to the markets. The referenced paper claims better than 50% accuracy in classifying whether trades end up winners or losers in the following month. Using only price as input. The model is correct 53.84% of the time when it predicts class 1, and somewhat less often, 53.01% of the time, when it predicts class 2. Bear in mind that a typical unglamorous, old-fashioned trend-following strategy usually delivers 40% winning trades and profits by running winners and cutting losers. If it worked in 2013, will it still work now? I would have thought banks and brokerage houses have armies of PhDs writing code like this. Many people think that way. And I know what you mean. But if that were true, one might as well give up altogether. As might Quantopian. I have no idea whether it still works, but I intend to replicate the study. All I trust is my own ignorance.

There was a thread a while back where someone tried this using one of the machine learning libraries on a single stock: Predicting price movement via regimes and machine learning. It might be a good place to start. It runs quite slowly. To speed things up you may want to download price data from EOData (or another site) and work from that on your own machine.

Antony, I found this Python machine learning code (and the associated MOOC course) and thought you might find it useful: John Wittenauer's "Machine Learning Exercises in Python, Part 1". Another group has published better accuracy numbers (82% vs 53%). Not sure about the quality though. Perhaps you could just reach out to the authors about their implementation.
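The paper's stacked-RBM autoencoder is not reproduced anywhere in this thread. As a rough sketch of the general idea only (not the paper's architecture, data or hyperparameters), scikit-learn's BernoulliRBM can be stacked in front of a simple classifier:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Placeholder data: 1000 samples of 33 return-based features,
# label = 1 if the stock beat the median return next month, else 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 33))
y = (rng.random(1000) > 0.5).astype(int)

# Two stacked RBMs for unsupervised feature extraction, then a classifier.
model = Pipeline([
    ("scale", MinMaxScaler()),            # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=40, learning_rate=0.05,
                          n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=10, learning_rate=0.05,
                          n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("in-sample accuracy:", model.score(X, y))
```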
RBMs can be done in R with deepnet. Interesting. The methodology in the Springer link is also based only on price as input, although perhaps one should not be surprised at the greater accuracy: it predicts one minute ahead, whereas the Lee project predicts one month ahead. I am concentrating on Python, Keras and Theano. As well as sklearn. Is the paper freely available anywhere?

Antony - yes, some implementation differences. Python can make calls to R if necessary. Have you tried using it from Python? My current knowledge is infantile. I am starting from scratch on the whole subject and building ANNs from scratch to experiment, using some noddy textbooks. I am interested in the whole field, so I am looking at whatever ML techniques could be useful, including RBMs. My hunch is that as far as long-term investment is concerned this will all turn out to be a waste of time. Or rather that it will not provide me with better risk-adjusted returns than the simple 50/50 system outlined on my website. But we shall see. I am as keen to shoot the lights out as anyone else, but I know from experience that these projects usually turn out rather differently than one might have hoped! When I am a little further down the line I will contact Takeuchi and Lee and see what more (if anything) they did with this particular strategy. I wonder whether they actually traded it, either for themselves or for their employers.

Patrick: thank you. Oh dear, I just noticed this in the referenced paper: the data used for training and testing is tick data from September to November of 2008. One stock, tested over 3 months! I am surprised they did not take rather more than that, but who knows, perhaps the results would have been the same for different stocks and periods?

Hello Antony and group. Two issues: How many trials were involved in achieving this outperformance? It is not clear. Did they tweak the RBM parameters until they got the desired result? In addition to the look-ahead bias, which they claim is not an issue, there is also data snooping and selection bias. In fact, the selection bias could be quite large. The study was published at the end of 2013, but the out-of-sample test period ended in 2009. There is no reason for that, except in the case that the outperformance came from short selling during the 2000 and 2008 bear markets, in which case it disappeared after 2009. The price-series momentum outperformance claims by Glabadanidis were recently debunked by Professor Zakamulin after he showed there was look-ahead bias in the calculations. More about this and other issues, as well as the special market conditions that give rise to high t-statistics, is in my recent paper, papers.ssrn/sol3/papers.cfm?abstract_id=2810170.

Has anyone examined the technique proposed by Lee et al.? I am having a go (using free Quandl data) but I am finding it hard to follow. I can handle the ML aspects, but I am not entirely sure how to package the data. I think it is something like this: at a given point in time, for a given stock, we can build a (labelled) training item using the previous 13 months' worth (and the following 1 month's worth) of daily data for that stock. We use that data to build 12 monthly cumulative returns ending in the month just before our point in time. So I am just compounding the daily Adj_Close prices & spitting out a value every 30 or so passes. Now it gets interesting. They do the same thing for every other stock at that moment and get a z-value for our stock against that set (i.e. the number of standard deviations away from the mean). So the movement of this z-value shows the growth of this particular stock relative to the whole market. Since the algorithm will invest a fixed amount of money in the market and just shift it between stocks, that is what you want! They appear to do this for each of the 12 monthly cumulative returns. Then they do the same process over the previous 30 days. This actually makes a lot of sense, because you want to be feeding data with mean 0, roughly within a (-1, +1) range, into your NN. So that covers the input data.
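In code, my reading of that input construction would look roughly like the sketch below. It follows the interpretation above, with placeholder prices, and builds only the 12 monthly cumulative returns and their cross-sectional z-scores; the 20 daily cumulative returns and the label are omitted for brevity, and none of this is the paper's own code.

```python
import numpy as np
import pandas as pd

# Placeholder monthly Adj_Close prices: one row per month-end, one column per ticker.
rng = np.random.default_rng(0)
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0, 0.05, size=(40, 50)), axis=0)),
    columns=[f"S{i}" for i in range(50)])

monthly_ret = prices.pct_change()

def features_at(t, ticker):
    """12 monthly cumulative returns up to row t, z-scored against
    the cross-section of all stocks at each of those 12 month-ends."""
    window = monthly_ret.iloc[t - 12:t]              # last 12 months, all stocks
    cum = (1 + window).cumprod() - 1                 # cumulative returns
    z = cum.sub(cum.mean(axis=1), axis=0).div(cum.std(axis=1), axis=0)
    return z[ticker].values                          # shape (12,)

print(features_at(20, "S3"))
```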
(There is one additional input, which is a beginning-of-year flag.) But a complete supervised training item also requires a paired output value, and it looks like they just use whether that particular stock went up. Although I do not quite follow their language - they talk about the median, cross-sectionally - it seems a really strange way to do it. Why not just look at whether the price one month later is above or below the price at this particular moment and output 1 or 0 accordingly? I think that is what I will do, since I do not understand what they are saying. Then I can just assume that everything is shifted forward by one day in the algorithm and repeat to generate another sample. It seems odd to me that they do not make use of daily volume. I was actually going to implement this on a local machine in TensorFlow using Yahoo data I had downloaded.

"Above the median" just means "above the median of the percentage returns of all stocks for that month". Just looking at whether the price is higher or lower in absolute terms for the month (rather than whether it is higher or lower relative to all the other stocks' movements) would probably be less effective. And they are consistent in using this relative approach, since all the return-data features are z-scored against the cross-section at each time step for each month. I have been re-implementing and backtesting this in Zipline, and so far I have not been able to replicate their impressive results, but I am still optimistic. At the moment my code does not use the encoder feature (I will re-code that part when I get time), and I am not training either the autoencoder or the full network for very many cycles on a single GPU machine. I also think I could add historical data for stocks that are no longer listed (rather than only the currently tradable stocks I am using as my "universe"), which should give better results for the feature extraction in the autoencoder stage. It was not clear to me whether they did that or not. Naturally, that old historical data would have to come from another source (not the free Yahoo data). They train with data from 1965-1989, which is just not that much data for a deep neural network (and probably far too old for the resulting model to have any practical value for trading today). By the way, these guys seemed to be able to reproduce the white paper's results with the same input features and a slightly different machine learning model: math.kth.se/matstat/seminarier/reports/M-exjobb15/150612a.pdf.

So roughly 53% correct predictions on the test set? In 53% of cases the network predicted stocks that ended up in the top half of returns in the following period? Very similar. No backtest was provided though. As I say, much better than many long-term TF systems. Yes, supposedly 52.89% in the referenced paper, although I am not getting those results in my own code (yet). Yes, it is too bad that no backtest data is provided. This algorithm is definitely long-term, low-frequency (run it once a month and hold your positions for a whole month), although it could certainly be modified to be shorter-term. I intend to play around with minute data too eventually, and with different trading frequencies on the monthly/daily data.

The Takeuchi paper did not mention vol or drawdown either. Likely to be very high, I imagine. Also all sorts of other problems, such as bias depending on the rebalancing date, and God knows what else. But interesting stuff. Personally, a one-month holding period does not worry me if the returns really are that good. But, to be honest, after years of fooling myself I am pretty sceptical about backtesting any system whatsoever. In my humble experience, an annual return above 15% is either due to look-ahead bias or overfitting. The market does not allow such high returns, because a leveraged trader would end up owning it over the long term. So these academic researchers are being fooled by backtesting and its severest caveat, which is assuming "no impact" on prices.
If you take a look at my Aug 124 post above, there is a mention of the papers by Glabadanidis on price-series momentum that claimed significant results, with returns in the region of 15%, only to be recently refuted by Zakamulin as being the result of look-ahead bias. We are talking about simple algos here, yet the code implementing them in Excel had look-ahead bias. Imagine what can go wrong with complicated ML algos in this domain. I use a geometric equity-curve test. If it holds, then the probability of a flaw in the backtest is > 95%.

Michael Harris, you may well be right, and thank you for trying to save me from myself and from impractical academics, but I have decided I will be happy just to reproduce these returns even if they are flawed. At that point, if they look too good to be true, I will try to pick them apart to find the bias/snooping/overfitting. The main point for me was really an exercise in learning TensorFlow and applying deep learning techniques to financial time-series data. I believe there are patterns to be teased out of this kind of data using deep learning approaches, even if perhaps the momentum-based model that this particular ML algorithm produces will not end up being profitable. The great thing about deep neural networks is that once you have the basic data flow down and the network architecture declared, it is easy to feed in different data that you think might be predictive and produce a model with completely different behaviour. It is also relatively easy to modify the network architecture, and it is very easy to tweak the parameters to see whether they give better test results, although, as you mentioned, if done incorrectly I understand there is a risk of overfitting. I still have a lot to learn about the gotchas, so thanks for the words of caution.

Justin Weeks, perhaps you misunderstood me: I was not commenting on your work and efforts, but on academic papers with results that cannot be replicated and that even contain serious errors, flawed assumptions and evidence of a lack of understanding of markets and trading. If you pay close attention to the results of that paper, the following problems are present: repeated trials until the authors got a good result, which introduces data-snooping bias; and they do not adjust their statistics for it, which shows a lack of understanding of data-mining risks. Most of the gains were between 1990 and 2001, probably more favourable to the long side during the strongest uptrend in stock-market history, and to the short side during the dot-com crash. The authors do not report important metrics, such as maximum drawdown, Sharpe ratio and payoff ratio. Unfortunately, academia knows how to dazzle company executives with promises of high returns; the authors of papers like this get highly paid jobs, and before they are let go they accumulate a nice fortune at the expense of honest analysts who would never report unrealistic annualized return numbers and who would apply a reality check to limit data-mining bias. Those honest people have no impressive results to show, only reality, and they would never make it through the door of a large investment bank or hedge fund. The whole paper was a demonstration of how to use ML to needlessly overfit data and generate unrealistic returns while obscuring the facts.

Oh God, you really should listen to Michael. He is so damn right. Here I sit in the middle of yet another book - the foolish publishers came back for more. I wanted the whole first section to be on what not to do, and had written quite a few chapters on the folly of relying on backtesting in probabilistic trading. The publishers are asking me not to: readers apparently only want to hear what works. I am actually convinced that ML is a suitable tool for trend following, but I have no doubt that a 45% annual return is a fool's errand. Unlike Michael I do not believe in trends (in stocks at least), although even there I have fooled and misled myself in the past through curve fitting. After 30 years in the markets, 15 of them spent largely on systematic trading of one sort or another, I am deeply cynical.
The hedge fund world mostly makes money for the fund managers, who walk away with huge fees after their funds blow up. Then they start another one. We seem to have two sides to the argument: machine learning experts who know little about real-world trading, and real-world traders who lack expertise in machine learning. I have got hooked on ML. If I can develop profitable trading algorithms, great! If not, never mind - there are plenty of decent-looking fallback options. I see no suggestion of intellectual dishonesty in the Lee paper, but I do agree it is annoying that papers are allowed to publish results without supporting code. If anyone is interested in chatting about ML, do pop into ##machinelearning on irc.freenode.

Justin - thanks for that reply, and the link! PS I looked through the paper Patrick linked (link.springer/chapter/10.1007/978-3-319-42297-8_40); it looks pretty clear. But the original paper looks robust as far as I can see. I will keep trying to replicate it.

"Machine learning experts who know little about real-world trading, and real-world traders who lack expertise in machine learning." That hints at a false dichotomy. Real-world trading can be accomplished through a variety of methods, including ML. Missing ML experience may not be a disadvantage in many cases, since ML can provide many exercises in futility. "I see no suggestion of intellectual dishonesty in the Lee paper." One expects university researchers to be aware of data mining and data snooping. The paper was about p-hacking with ML. That is disturbing for an academic paper. The exact number of trials taken to arrive at the final result should have been reported. But this does not amount to intellectual dishonesty, rather to a naive application of ML. Good point about the code, but I suspect that even if you had the exact code you would still be unable to replicate the results because of randomness.

"You would still be unable to replicate the results because of stochasticity." You should be able to get close. In deep learning, random numbers are normally used only to generate the initial weights. Although I am one of those with lots of experience in the markets and little in ML! These are the error rates of five successive runs of a multi-layer perceptron with exactly the same parameters on exactly the same data, from a project I am working on for a client.

Yes, but I wonder how those differences translate into CAGR in a trading system? I wonder whether it makes much difference that one run predicts, say, 51% of stocks correctly each month and another 52.4%? Given the vagaries of backtesting, I suspect not? ML is just fitting a non-linear equation with 10s to 1000s of undetermined coefficients to the data. It seems it would be impossible to avoid overfitting. In a market trending up or down, I suspect ML algorithms will just learn momentum rules. If ML is going to work, I think you will need to apply it to multiple stocks at once, and throw in fundamental data, economic factors, etc. Then maybe it can discover a pattern in a data set too large for a human to contemplate. The human brain is really good at recognizing patterns. If there is a pattern in the price history of a single stock, I think you will see it.

Just a note that a key part of the Takeuchi/Lee paper is that they "stationarize" the data by transforming it into a cross-sectional form: "We compute a series of 12 cumulative returns using monthly returns and 20 cumulative returns using daily returns. We note that price momentum is a cross-sectional phenomenon, with winners having high past returns and losers having low past returns relative to other stocks. Thus we normalize each of the cumulative returns by computing the z-score relative to the cross-section of all stocks for each month." If the statistics isn't stationary, the model will not converge, or if it manages to converge (mathematically) it's not going to be very useful.
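On that last point, a quick way to sanity-check whether a series looks stationary is an augmented Dickey-Fuller test. A minimal sketch (statsmodels assumed available; the data is a placeholder random walk compared with its differences):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=1000))      # random walk: non-stationary
returns = np.diff(prices)                      # differences: stationary

for name, series in [("prices", prices), ("returns", returns)]:
    stat, pvalue = adfuller(series)[:2]
    print(f"{name}: ADF p-value = {pvalue:.3f}")   # small p-value -> stationary
```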
David, I think they just kept on trying things until they got an impressive result. This is the definition of data-mining bias, mainly driven by data snooping. Nowhere in their paper is there a reference to data-mining bias. Michael, data snooping is definitely a possibility. However, the set-up seems quite plausible for fairly good results - maybe not 50% returns, but perhaps up to 20% in a "normal" year. I know you spoke of 15%, but I am optimistic, perhaps naively. Normal year: the paper didn't talk about more interesting things such as (macro) regime switching that might affect the test results. For instance, momentum behaviour can be wildly different in the last quarter of 2008 vs. the second through fourth quarters of 2009. Whether your test covers or misses 2008 could change the results. A likely place data snooping gets into the set-up (unless the authors actually kept trying different setups) is the hold-out cross-validation portion. In my experience this is where "leakage" can inadvertently be introduced into the system. By leakage I mean leakage of future data. The authors never provided the details of the hold-out cross-validation, but if they were not careful with how they created the test set or sets for it, they probably committed the same mistakes when training the finished product. Kaggle has a good page on leakage. From another platform's CEO: "Many of those algorithms were developed by students using sophisticated machine learning methods like neural networks. I'm impressed by the quality and stability of the trading algorithms." Deep learning appears very important to stay competitive. "If you have offline access to relevant trading data, you could train a net from that on non-Quantopian machines and then translate the resulting net to scipy for execution in the Quantopian framework." Does that mean I could execute in the Quantopian framework but would be unable to join the contest? I have the relevant data. I am looking for ways to get some paper-trade track record. I could use Interactive Brokers' paper trading, but it is costly to have many IB accounts. Greg - thanks for the info. "It runs quite slow. To speed things up you may want to download price data from EOData (or other site) and work from that on your own machine." After working with outside data on my own machine, is there any shortcut for changing the code back so it uploads smoothly to Quantopian? "Many of those algorithms were developed by students using sophisticated machine learning methods like neural networks. I'm impressed by the quality and stability of the trading algorithms." But the assumption is that he does not know the algorithms, or am I missing something? Maybe the next market regime change will sort things out. [deleted - see below post from Antony] "Next market regime change" - you mean, when some platforms cannot survive? Very interesting anyway. I mean that when market dynamics change, all the over-fitted ML systems will fail. More information about the significance problem can be found in my paper: papers.ssrn/sol3/papers.cfm?abstract_id=2810170. For now the impact of these competitions is small. Market regime changes are driven by structural changes (algo trading in the late 1990s, decimalization, then HFT, etc.). In my opinion ensemble results are random: priceactionlab/Blog/2016/09/data-science/. There is no way of distinguishing a low log loss due to multiple trials from a statistically significant one.
These competitions are doomed in my opinion, as more entrants mean further convergence of the sample mean to a true mean of 0. Plus they have a short-term risk of ruin that is uncontrollable, although small. The key to profits is identifying one or two robust features for the current regime and using those in a simple algo. All else translates to more bias, more noise, more risk.

I think this thread has drifted off topic. If that is the case, could those responsible please create new threads & migrate accordingly? I would like to remain subscribed to this thread but only receive notifications that pertain to the original subject.

I can implement that, working on the Indian market; my interest is more in minute or five-minute data. Also, there can be far better use of deepnet if you combine it with self-learned patterns.

Does anybody have experience with putting the trained network into production? To be more specific, how to save the trained model and use it in a real-time trading environment. Thanks.

Just doing that with my own machine learning algos on the VIX futures contracts. I will report back when done. But I won't be using it on Q or drafting it in Q, since I use daily prices, futures contracts and a different Python backtesting engine.

I've been looking to backtest this for a long time. Finally, I took a stab at it. Here are my results (and settings):
Total no. of tickers: 2,585. Exchange: NYSE and NASDAQ. Date range: 2012-02-21 to 2016-11-29. Business days: 1,203. Train data: from start until 2015-12-31. Test data: from 2016-01-01 to end.
Neural Network (Encoder-decoder)
• Architecture: nodes per layer (33 i/p)-40-4-40-(33 o/p); activation for hidden layers: ReLU; activation for output: linear.
• Optimization: batch_size=100,000; optimizer: Adam (learning rate 0.001); loss function: MSE.
• Performance (on training set): loss after 100 epochs = 0.1505.
Neural Network (Classifier)
• Architecture: nodes per layer (4 i/p) -> 20 -> (1 o/p); activation for hidden layer: ReLU; activation for output: sigmoid.
• Optimization: batch_size=100,000; optimizer: Adam (learning rate 0.01); loss function: binary_crossentropy; regularization: 40% dropout in the hidden layer.
• Performance (on training set): loss after 100 epochs = 0.6926; accuracy (classification rate): 0.5141.
• Performance (on test set): accuracy (classification rate): 0.4844.
• Return (long top decile and short bottom decile): -1.66% (annualized).
I used Quandl data (EOD dataset) to construct the 13 features as suggested in the paper. I used different learning rates and regularization approaches, but the results do not differ drastically. Interestingly, a naive approach of going long (on every stock) over the same period yields a +19.34% return. This is not surprising, since the test period is 2016 and the market grew at an equivalent rate. Looking forward to your thoughts.

I like your blogs, but I think you are missing something about ML algos. They can be adaptive if you use a rolling window with weights to retrain. That is the same process we human beings use when relearning a new environment. A DNN may need more data, but other ML algos might still be useful. The method in the paper might have "overfitted" the strategy in picking the network architecture, but as they are not directly optimizing on the final PnL, I think the "overfitting" problem would be less severe than in normal trading-system optimization on the final PnL/Sharpe/Sortino.
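For anyone who wants to poke at the setup described in the backtest above, here is a minimal Keras sketch of that encoder-decoder plus classifier. The data is a random placeholder and the batch size and epoch count are reduced so it runs quickly; this is not the poster's actual code.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 33)).astype("float32")   # 33 input features
y = (rng.random(5000) > 0.5).astype("float32")      # above/below-median label

# Encoder-decoder: 33 -> 40 -> 4 -> 40 -> 33, ReLU hidden layers, linear output.
inputs = keras.Input(shape=(33,))
h = layers.Dense(40, activation="relu")(inputs)
code = layers.Dense(4, activation="relu")(h)
h = layers.Dense(40, activation="relu")(code)
outputs = layers.Dense(33, activation="linear")(h)
autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")
autoencoder.fit(X, X, batch_size=256, epochs=5, verbose=0)

# Classifier on the 4-dimensional code: 4 -> 20 -> 1, with 40% dropout.
encoder = keras.Model(inputs, code)
clf_in = keras.Input(shape=(4,))
h = layers.Dense(20, activation="relu")(clf_in)
h = layers.Dropout(0.4)(h)
clf_out = layers.Dense(1, activation="sigmoid")(h)
classifier = keras.Model(clf_in, clf_out)
classifier.compile(optimizer=keras.optimizers.Adam(1e-2),
                   loss="binary_crossentropy", metrics=["accuracy"])
classifier.fit(encoder.predict(X, verbose=0), y,
               batch_size=256, epochs=5, verbose=0)
```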
I have carried out similar experiments on US stocks and I think your training size is a little bit too small. Nevertheless, the system is not doing very well since 2016 in my setups, even though I have used cross-validation to tune the NN/ML structure. The best period in my test window (2000-2017) was right after the tech bubble, which corroborates figure 4 in the Stanford paper. Post-2000, my monthly return is much lower (20% CAGR, 1.6 Sharpe, 16% MaxDD) than the number reported in the paper, partly because of using only post-2000 data in the test sample.

Adding more data may not help, since currently the training data has close to 3 million observations.

I see your point, but I think the original paper was forecasting monthly returns instead of daily returns, so you would only have 2,500*12*5 = 150K data points. With half for training, you "only" have 75K data points for a deep NN, which might be too small? I guess your use of daily-return forecasts versus monthly-return forecasts might explain why your test resulted in a negative CAGR while mine is still positive, albeit much smaller than in the paper.

I, too, forecast monthly returns, but I do not constrain constructing the features to just the 1st day of every month. I construct them for every day. This way I have 2,515*1,203 = 3M observations. When computing PnL, however, I choose a particular day of the month to open/close a position. I acknowledge that this way consecutive days will not have much variation in input features/outcome. Nonetheless, I'll try training on more isolated dates (one each month) as you suggested.
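Since look-ahead leakage came up repeatedly in this thread, here is a minimal sketch of a purely time-ordered hold-out scheme using scikit-learn's TimeSeriesSplit (placeholder data); every fold trains strictly on data that precedes its test window, so no future information leaks into the evaluation.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Placeholder feature matrix and labels, ordered oldest-to-newest.
rng = np.random.default_rng(0)
X = rng.normal(size=(1203, 13))
y = (rng.random(1203) > 0.5).astype(int)

for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    assert train_idx.max() < test_idx.min()      # training always precedes testing
    print(f"train up to row {train_idx.max()}, "
          f"test rows {test_idx.min()}-{test_idx.max()}")
```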
Build Better Strategies! Part 4: Machine Learning.

Deep Blue was the first computer that won a chess world championship. That was 1996, and it took 20 years until another program, AlphaGo, could defeat the best human Go player. Deep Blue was a model-based system with hardwired chess rules. AlphaGo is a data-mining system, a deep neural network trained with thousands of Go games. Not improved hardware, but a breakthrough in software was needed for the step from beating top chess players to beating top Go players. In this part 4 of the mini-series we'll look into the data mining approach for developing trading strategies. This method does not care about market mechanisms. It just scans price curves or other data sources for predictive patterns. Machine learning or "Artificial Intelligence" is not always involved in data mining strategies. In fact the most popular – and surprisingly profitable – data mining method works without any fancy neural networks or support vector machines.

Machine learning principles.

A learning algorithm is fed with data samples, normally derived in some way from historical prices. Each sample consists of n variables x1 .. xn, named predictors or features. The predictors can be the price returns of the last n bars, or a collection of classical indicators, or any other imaginable functions of the price curve (I've even seen the pixels of a price chart image used as predictors for a neural network!). Each sample also normally includes a target variable y, like the return of the next trade after taking the sample, or the next price movement. In a training process, the algorithm learns to predict the target y from the predictors x1 .. xn. The learned 'memory' is stored in a data structure named model that is specific to the algorithm. Such a model can be a function with prediction rules in C code, generated by the training process. Or it can be a set of connection weights of a neural network.

The predictors must carry information sufficient to predict the target y with some accuracy. They must also often fulfill two formal requirements. First, all predictor values should be in the same range, like -1 .. +1 (for most R algorithms) or -100 .. +100 (for Zorro or TSSB algorithms). So you need to normalize them in some way before sending them to the machine. Second, the samples should be balanced, i.e. equally distributed over all values of the target variable. So there should be about as many winning as losing samples. If you do not observe these two requirements, you'll wonder why you're getting bad results from the machine learning algorithm. Regression algorithms predict a numeric value, like the magnitude and sign of the next price move.
Classification algorithms predict a qualitative sample class, for instance whether it's preceding a win or a loss. Some algorithms, such as neural networks, decision trees, or support vector machines, can be run in both modes. A few algorithms learn to divide samples into classes without needing any target y. That's unsupervised learning, as opposed to supervised learning using a target. Somewhere in between is reinforcement learning, where the system trains itself by running simulations with the given features, and using the outcome as training target. AlphaZero, the successor of AlphaGo, used reinforcement learning by playing millions of Go games against itself. In finance there are few applications for unsupervised or reinforcement learning. 99% of machine learning strategies use supervised learning. Whatever signals we're using for predictors in finance, they will most likely contain much noise and little information, and will be nonstationary on top of it. Therefore financial prediction is one of the hardest tasks in machine learning. More complex algorithms do not necessarily achieve better results. The selection of the predictors is critical to the success. It is not a good idea to use lots of predictors, since this simply causes overfitting and failure in out-of-sample operation. Therefore data mining strategies often apply a preselection algorithm that determines a small number of predictors out of a pool of many. The preselection can be based on correlation between predictors, on significance, on information content, or simply on prediction success with a test set. Practical experiments with feature selection can be found in a recent article on the Robot Wealth blog. Here's a list of the most popular data mining methods used in finance. 1. Indicator soup. Most trading systems we're programming for clients are not based on a financial model. The client just wanted trade signals from certain technical indicators, filtered with other technical indicators in combination with more technical indicators. When asked how this hodgepodge of indicators could be a profitable strategy, he normally answered: "Trust me. I'm trading it manually, and it works." It did indeed. At least sometimes. Although most of those systems did not pass a WFA test (and some not even a simple backtest), a surprisingly large number did. And those were also often profitable in real trading. The client had systematically experimented with technical indicators until he found a combination that worked in live trading with certain assets. This way of trial-and-error technical analysis is a classical data mining approach, just executed by a human and not by a machine. I cannot really recommend this method – and a lot of luck, not to speak of money, is probably involved – but I can testify that it sometimes leads to profitable systems. 2. Candle patterns. Not to be confused with those Japanese Candle Patterns that had their best-before date long, long ago. The modern equivalent is price action trading. You're still looking at the open, high, low, and close of candles. You're still hoping to find a pattern that predicts a price direction. But you're now data mining contemporary price curves for collecting those patterns. There are software packages for that purpose. They search for patterns that are profitable by some user-defined criterion, and use them to build a specific pattern detection function.
It could look like the one generated by Zorro's pattern analyzer: a C function that returns 1 when the signals match one of the patterns, otherwise 0. The lengthy generated code shows that this is not the fastest way to detect patterns. A better method, used by Zorro when the detection function needs not be exported, is sorting the signals by their magnitude and checking the sort order. An example of such a system can be found here. Can price action trading really work? Just like the indicator soup, it's not based on any rational financial model. One can at best imagine that sequences of price movements cause market participants to react in a certain way, this way establishing a temporary predictive pattern. However the number of patterns is quite limited when you only look at sequences of a few adjacent candles. The next step is comparing candles that are not adjacent, but arbitrarily selected within a longer time period. This way you're getting an almost unlimited number of patterns – but at the cost of finally leaving the realm of the rational. It is hard to imagine how a price move can be predicted by some candle patterns from weeks ago. Still, a lot of effort is going into that. A fellow blogger, Daniel Fernandez, runs a subscription website (Asirikuy) specialized on data mining candle patterns. He refined pattern trading down to the smallest details, and if anyone would ever achieve any profit this way, it would be him. But to his subscribers' disappointment, trading his patterns live (QuriQuant) produced very different results than his wonderful backtests. If profitable price action systems really exist, apparently no one has found them yet. 3. Linear regression. The simple basis of many complex machine learning algorithms: predict the target variable y by a linear combination of the predictors x1 .. xn:

y = a0 + a1*x1 + a2*x2 + ... + an*xn

The coefficients an are the model. They are calculated for minimizing the sum of squared differences between the true y values from the training samples and their predicted y from the above formula. For normal distributed samples, the minimizing is possible with some matrix arithmetic, so no iterations are required. In the case n = 1 – with only one predictor variable x – the regression formula is reduced to

y = a0 + a1*x

which is simple linear regression, as opposed to multivariate linear regression where n > 1. Simple linear regression is available in most trading platforms, f.i. with the LinReg indicator in the TA-Lib. With y = price and x = time it's often used as an alternative to a moving average. Multivariate linear regression is available in the R platform through the lm(..) function that comes with the standard installation. A variant is polynomial regression. Like simple regression it uses only one predictor variable x, but also its square and higher degrees, so that the n-th feature is xn == x^n:

y = a0 + a1*x + a2*x^2 + ... + an*x^n

With n = 2 or n = 3, polynomial regression is often used to predict the next average price from the smoothed prices of the last bars. The polyfit function of MatLab, R, Zorro, and many other platforms can be used for polynomial regression. 4. Perceptron. Often referred to as a neural network with only one neuron. In fact a perceptron is a regression function like above, but with a binary result, thus called logistic regression. It's not regression though, it's a classification algorithm.
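As a generic illustration of that idea (a sketch with placeholder data, not Zorro's advise() machinery), a logistic-regression classifier can be trained on a couple of features and its output mapped to the +100/-100 convention used below:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder features (e.g. two indicator values per bar) and binary targets
# (1 = the following trade won, 0 = it lost).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.5, 500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def advise_like(features):
    """Return +100 for a predicted win, -100 for a predicted loss."""
    prob_win = model.predict_proba(features.reshape(1, -1))[0, 1]
    return 100 if prob_win > 0.5 else -100

print(advise_like(np.array([0.8, -0.3])))
```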
Zorro's advise(PERCEPTRON, …) function generates C code that returns either 100 or -100, dependent on whether the predicted result is above a threshold or not. In the generated code, the sig array is equivalent to the features xn in the regression formula, and the numeric factors are the coefficients an. 5. Neural networks. Linear or logistic regression can only solve linear problems. Many do not fall into this category – a famous example is predicting the output of a simple XOR function. And most likely also predicting prices or trade returns. An artificial neural network (ANN) can tackle nonlinear problems. It's a bunch of perceptrons that are connected together in an array of layers. Any perceptron is a neuron of the net. Its output goes to the inputs of all neurons of the next layer. Like the perceptron, a neural network also learns by determining the coefficients that minimize the error between sample prediction and sample target. But this requires now an approximation process, normally with backpropagating the error from the output to the inputs, optimizing the weights on its way. This process imposes two restrictions. First, the neuron outputs must now be continuously differentiable functions instead of the simple perceptron threshold. Second, the network must not be too deep – it must not have too many 'hidden layers' of neurons between inputs and output. This second restriction limits the complexity of problems that a standard neural network can solve. When using a neural network for predicting trades, you have a lot of parameters with which you can play around and, if you're not careful, produce a lot of selection bias (a library mapping of these knobs is sketched after this section):

Number of hidden layers
Number of neurons per hidden layer
Number of backpropagation cycles, named epochs
Learning rate, the step width of an epoch
Momentum, an inertia factor for the weight adaptation
Activation function

The activation function emulates the perceptron threshold. For the backpropagation you need a continuously differentiable function that generates a 'soft' step at a certain x value. Normally a sigmoid, tanh, or softmax function is used. Sometimes it's also a linear function that just returns the weighted sum of all inputs. In this case the network can be used for regression, for predicting a numeric value instead of a binary outcome. Neural networks are available in the standard R installation (nnet, a single hidden layer network) and in many packages, for instance RSNNS and FCNN4R. 6. Deep learning. Deep learning methods use neural networks with many hidden layers and thousands of neurons, which could not be effectively trained anymore by conventional backpropagation. Several methods became popular in the last years for training such huge networks. They usually pre-train the hidden neuron layers for achieving a more effective learning process. A Restricted Boltzmann Machine (RBM) is an unsupervised classification algorithm with a special network structure that has no connections between the hidden neurons. A Sparse Autoencoder (SAE) uses a conventional network structure, but pre-trains the hidden layers in a clever way by reproducing the input signals on the layer outputs with as few active connections as possible. Those methods allow very complex networks for tackling very complex learning tasks. Such as beating the world's best human Go player. Deep learning networks are available in the deepnet and darch R packages. Deepnet provides an autoencoder, Darch a restricted Boltzmann machine.
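As promised above, here is a rough sketch of how those knobs map onto one common library, scikit-learn's MLPClassifier. The data is a random placeholder and the settings are illustrative only, not recommendations.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.normal(size=(1000, 8)))  # normalized predictors
y = (rng.random(1000) > 0.5).astype(int)                        # balanced targets

net = MLPClassifier(
    hidden_layer_sizes=(16, 8),   # number of hidden layers and neurons per layer
    max_iter=300,                 # backpropagation cycles ("epochs")
    learning_rate_init=0.01,      # step width of an epoch
    momentum=0.9,                 # inertia factor for the weight adaptation
    activation="tanh",            # activation function
    solver="sgd",                 # momentum only applies to the sgd solver
    random_state=0,
)
net.fit(X, y)
print("in-sample accuracy:", net.score(X, y))
```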
I have not yet experimented with Darch, but the Deepnet autoencoder with 3 hidden layers can be hooked up to generate trade signals through Zorro's neural() function. 7. Support vector machines. Like a neural network, a support vector machine (SVM) is another extension of linear regression. When we look at the regression formula again, we can interpret the features xn as coordinates of an n-dimensional feature space. Setting the target variable y to a fixed value determines a plane in that space, called a hyperplane since it has more than two (in fact, n-1) dimensions. The hyperplane separates the samples with y > 0 from the samples with y < 0. The an coefficients can be calculated in a way that the distances of the plane to the nearest samples – which are called the 'support vectors' of the plane, hence the algorithm name – is maximum. This way we have a binary classifier with optimal separation of winning and losing samples. The problem: normally those samples are not linearly separable – they are scattered around irregularly in the feature space. No flat plane can be squeezed between winners and losers. If it could, we had simpler methods to calculate that plane, f.i. linear discriminant analysis. But for the common case we need the SVM trick: adding more dimensions to the feature space. For this the SVM algorithm produces more features with a kernel function that combines any two existing predictors to a new feature. This is analogous to the step above from simple regression to polynomial regression, where also more features are added by taking the sole predictor to the n-th power. The more dimensions you add, the easier it is to separate the samples with a flat hyperplane. This plane is then transformed back to the original n-dimensional space, getting wrinkled and crumpled on the way. By clever selection of the kernel function, the process can be performed without actually computing the transformation. Like neural networks, SVMs can be used not only for classification, but also for regression. They also offer some parameters for optimizing and possibly overfitting the prediction process:

Kernel function. You normally use an RBF kernel (radial basis function, a symmetric kernel), but you also have the choice of other kernels, such as sigmoid, polynomial, and linear.
Gamma, the width of the RBF kernel
Cost parameter C, the 'penalty' for wrong classifications in the training samples

An often used SVM implementation is the libsvm library. It's also available in R in the e1071 package. In the next and final part of this series I plan to describe a trading strategy using this SVM. 8. K-Nearest neighbor. Compared with the heavy ANN and SVM stuff, that's a nice simple algorithm with a unique property: it needs no training. So the samples are the model. You could use this algorithm for a trading system that learns permanently by simply adding more and more samples. The nearest neighbor algorithm computes the distances in feature space from the current feature values to the k nearest samples. A distance in n-dimensional space between two feature sets (x1 .. xn) and (y1 .. yn) is calculated just as in 2 dimensions:

d = sqrt((x1 - y1)^2 + (x2 - y2)^2 + ... + (xn - yn)^2)

The algorithm simply predicts the target from the average of the k target variables of the nearest samples, weighted by their inverse distances. It can be used for classification as well as for regression. Software tricks borrowed from computer graphics, such as an adaptive binary tree (ABT), can make the nearest neighbor search pretty fast.
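A minimal sketch of the nearest-neighbor prediction just described (an inverse-distance-weighted average of the k nearest targets), with placeholder data:

```python
import numpy as np

def knn_predict(samples, targets, query, k=5):
    """Predict the target for `query` from the k nearest samples,
    weighted by their inverse distances in feature space."""
    dists = np.linalg.norm(samples - query, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-12)        # avoid division by zero
    return np.average(targets[nearest], weights=weights)

# Placeholder: 200 stored samples with 4 features and a numeric target each.
rng = np.random.default_rng(0)
samples = rng.normal(size=(200, 4))
targets = samples @ np.array([0.5, -0.2, 0.1, 0.0]) + rng.normal(0, 0.1, 200)
print(knn_predict(samples, targets, query=np.zeros(4)))
```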
In my past life as a computer game programmer, we used such methods in games for tasks like self-learning enemy intelligence. You can call the knn function in R for nearest neighbor prediction – or write a simple function in C for that purpose. 9. K-Means. This is an approximation algorithm for unsupervised classification. It has some similarity, not only in its name, to k-nearest neighbor. For classifying the samples, the algorithm first places k random points in the feature space. Then it assigns to any of those points all the samples with the smallest distances to it. The point is then moved to the mean of these nearest samples. This will generate a new samples assignment, since some samples are now closer to another point. The process is repeated until the assignment does not change anymore by moving the points, i.e. each point lies exactly at the mean of its nearest samples. We now have k classes of samples, each in the neighborhood of one of the k points. This simple algorithm can produce surprisingly good results. In R, the kmeans function does the trick. An example of the k-means algorithm for classifying candle patterns can be found here: Unsupervised candlestick classification for fun and profit. 10. Naive Bayes. This algorithm uses Bayes' Theorem for classifying samples of non-numeric features (i.e. events), such as the above mentioned candle patterns. Suppose that an event X (for instance, that the Open of the previous bar is below the Open of the current bar) appears in 80% of all winning samples. What is then the probability that a sample is winning when it contains event X? It's not 0.8 as you might think. The probability can be calculated with Bayes' Theorem:

P(Y|X) = P(X|Y) * P(Y) / P(X)

P(Y|X) is the probability that event Y (f.i. winning) occurs in all samples containing event X (in our example, Open(1) < Open(0)). According to the formula, it is equal to the probability of X occurring in all winning samples (here, 0.8), multiplied by the probability of Y in all samples (around 0.5 when you were following my above advice of balanced samples) and divided by the probability of X in all samples. If we are naive and assume that all events X are independent of each other, we can calculate the overall probability that a sample is winning by simply multiplying the probabilities P(X|winning) for every event X. This way we end up with this formula:

P(winning|X1..Xn) = s * P(X1|winning) * P(X2|winning) * ... * P(Xn|winning)

with a scaling factor s. For the formula to work, the features should be selected in a way that they are as independent as possible, which imposes an obstacle for using Naive Bayes in trading. For instance, the two events Close(1) < Close(0) and Open(1) < Open(0) are most likely not independent of each other. Numerical predictors can be converted to events by dividing the number into separate ranges. The Naive Bayes algorithm is available in the ubiquitous e1071 R package. 11. Decision and regression trees. Those trees predict an outcome or a numeric value based on a series of yes/no decisions, in a structure like the branches of a tree. Any decision is either the presence of an event or not (in case of non-numerical features) or a comparison of a feature value with a fixed threshold. A typical tree function, generated by Zorro's tree builder, consists of such nested threshold comparisons. How is such a tree produced from a set of samples? There are several methods; Zorro uses the Shannon information entropy, which already had an appearance on this blog in the Scalping article. At first it checks one of the features, let's say x1.
It places a hyperplane with the plane formula x1 = t into the feature space. This hyperplane separates the samples with x1 > t from the samples with x1 < t. The dividing threshold t is selected so that the information gain – the difference between the information entropy of the whole space and the sum of the information entropies of the two divided sub-spaces – is maximal. This is the case when the samples in the subspaces are more similar to each other than the samples in the whole space. This process is then repeated with the next feature x2 and two hyperplanes splitting the two subspaces. Each split is equivalent to a comparison of a feature with a threshold. By repeated splitting, we soon get a huge tree with thousands of threshold comparisons. Then the process is run backwards by pruning the tree and removing all decisions that do not lead to substantial information gain. Finally we end up with a relatively small tree of threshold comparisons.

Decision trees have a wide range of applications. They can produce excellent predictions superior to those of neural networks or support vector machines. But they are not a one-size-fits-all solution, since their splitting planes are always parallel to the axes of the feature space. This somewhat limits their predictions. They can be used not only for classification, but also for regression, for instance by returning the percentage of samples contributing to a certain branch of the tree. Zorro's tree is a regression tree. The best known classification tree algorithm is C5.0, available in the C50 package for R. For improving the prediction even further or overcoming the parallel-axis limitation, an ensemble of trees can be used, called a random forest. The prediction is then generated by averaging or voting the predictions from the single trees. Random forests are available in the R packages randomForest, ranger and Rborist.

Conclusion.

There are many different data mining and machine learning methods at your disposal. The critical question: what is better, a model-based or a machine learning strategy? There is no doubt that machine learning has a lot of advantages. You don't need to care about market microstructure, economy, trader psychology, or similar soft stuff. You can concentrate on pure mathematics. Machine learning is a much more elegant, more attractive way to generate trade systems. It has all advantages on its side but one. Despite all the enthusiastic threads on trader forums, it tends to mysteriously fail in live trading.

Every second week a new paper about trading with machine learning methods is published (a few can be found below). Please take all those publications with a grain of salt. According to some papers, fantastic win rates in the range of 70%, 80%, or even 85% have been achieved. Although win rate is not the only relevant criterion – you can lose even with a high win rate – 85% accuracy in predicting trades is normally equivalent to a profit factor above 5. With such a system the scientists involved should be billionaires by now. Unfortunately I never managed to reproduce those win rates with the described method, and didn't even come close. So maybe a lot of selection bias went into the results. Or maybe I'm just too stupid. Compared with model-based strategies, I have not seen many successful machine learning systems so far. And from what one hears about the algorithmic methods of successful hedge funds, machine learning still seems to be rarely used.
But maybe this will change in the future with the availability of more processing power and the advent of new algorithms for deep learning.

Classification using deep neural networks: Dixon et al. 2016.
Predicting price direction using ANN & SVM: Kara et al. 2011.
Empirical comparison of learning algorithms: Caruana et al. 2006.
Mining stock market tendency using GA & SVM: Yu, Wang, Lai 2005.

The next part of this series will deal with the practical development of a machine learning strategy.

30 thoughts on "Build Better Strategies! Part 4: Machine Learning"

Nice post. There is a lot of potential in this approach towards the market. Btw, are you using the code editor which comes with Zorro? How is it possible to get such a colour configuration?

The colorful script is produced by WordPress. You can't change the colors in the Zorro editor, but you can replace it with other editors that support individual colors, for instance Notepad++.

Is it then possible that Notepad++ detects the Zorro variables in the scripts? I mean that BarPeriod is highlighted as it is with the Zorro editor?

Theoretically yes, but for this you would have to configure the syntax highlighting of Notepad++, and enter all variables in the list. As far as I know Notepad++ can also not be configured to display the function description in a window, as the Zorro editor does. There's no perfect tool…

Concur with the final paragraph. I have tried many machine learning techniques after reading various 'peer reviewed' papers. But reproducing their results remains elusive. When I live test with ML I can't seem to outperform random entry.

ML fails in live trading? Maybe the training of the ML has to be done with price data that also includes historical spread, roll, ticks and so on?

I think reason #1 for live failure is data mining bias, caused by biased selection of inputs and parameters to the algo.

Thanks to the author for the great series of articles. However, it should be noted that we don't need to narrow our view to predicting only the next price move. It may happen that the next move goes against our trade in 70% of cases, but it is still worth making the trade. This happens when the price finally does go in the right direction, but before that it may make some steps against us. If we delay the trade by one price step we will not enter the mentioned 30% of trades, but in return we will increase the result of the remaining 70% by one price step. So the criterion is which value is higher: N*average_result or 0.7*N*(average_result + price_step).

Nice post. If you just want to play around with some machine learning, I implemented a very simple ML tool in Python and added a GUI. It's implemented to predict time series.

Thanks JCL, I found your article very interesting. I would like to ask you, from your expertise in trading, where can we download reliable historical forex data? I consider it very important due to the fact that the Forex market is decentralized. Thanks in advance!

There is no really reliable Forex data, since every Forex broker creates their own data. They all differ slightly depending on which liquidity providers they use. FXCM has relatively good M1 and tick data with few gaps. You can download it with Zorro.

Thanks for writing such a great article series JCL… a thoroughly enjoyable read! I have to say though that I don't view model-based and machine learning strategies as being mutually exclusive; I have had some OOS success by using a combination of the elements you describe.
To be more exact, I begin the system generation process by developing a 'traditional' mathematical model, but then use a set of online machine learning algorithms to predict the next terms of the various different time series (not the price itself) that are used within the model. The actual trading rules are then derived from the interactions between these time series. So in essence I am not just blindly throwing recent market data into an ML model in an effort to predict price action direction, but instead develop a framework based upon sound investment principles in order to point the models in the right direction. I then data mine the parameters and measure the level of data-mining bias as you've described also. It's worth mentioning however that I've never had much success with Forex. Anyway, best of luck with your trading and keep up the great articles!

Thanks for posting this great mini series JCL. I recently studied a few recent papers about ML trading, deep learning especially. Yet I found that most of them evaluated the results without a risk-adjusted index, i.e. they usually used the ROC curve or PnL to support their experiment instead of, for example, the Sharpe ratio. Also, they seldom mentioned the trading frequency in their experiment results, making it hard to evaluate the potential profitability of those methods. Why is that? Do you have any good suggestions to deal with those issues?

ML papers normally aim for high accuracy. Equity curve variance is of no interest. This is sort of justified because the ML prediction quality determines accuracy, not variance. Of course, if you want to really trade such a system, variance and drawdown are important factors. A system with lower accuracy and worse prediction can in fact be preferable when it's less dependent on market conditions.

"In fact the most popular – and surprisingly profitable – data mining method works without any fancy neural networks or support vector machines." Would you please name those most popular & surprisingly profitable ones, so I could directly use them?

I was referring to the Indicator Soup strategies. For obvious reasons I can't disclose details of such a strategy, and have never developed such systems myself. We're merely coding them. But I can tell that coming up with a profitable Indicator Soup requires a lot of work and time.

Well, I am just starting a project which uses simple EMAs to predict price; it just selects the correct EMAs based on past performance, an algorithm selection that provides some rustic degree of intelligence. Jonathan.orregogmail offers services as MT4 EA programmer.

There are the following issues with ML, and with trading systems in general, which are based on historical data analysis:

1) Historical data doesn't encode information about future price movements. Future price movement is independent and not related to the price history. There is absolutely no reliable pattern which can be used to systematically extract profits from the market. Applying ML methods in this domain is simply pointless and doomed to failure, and is not going to work if you search for a profitable system. Of course you can curve-fit any past period and come up with a profitable system for it. The only thing which determines price movement is demand and supply, and these are often the result of external factors which cannot be predicted.
For example: a war breaks out somewhere, or another major disaster strikes, or someone just needs to buy a large amount of a foreign currency for some business/investment purpose. These sorts of events will cause significant shifts in the demand/supply structure of the FX market. As a consequence, prices begin to move, but nobody really cares about price history – just about the execution of the incoming orders. An automated trading system can only be profitable if it monitors a significant portion of the market and takes the supply and demand into account for making a trading decision. But this is not the case with any of the systems being discussed here.

2) Race to the bottom. Even if (1) weren't true and there were valuable information encoded in historical price data, you would still face the following problem: there are thousands of gold diggers out there, all of them using similar methods and even the same tools to search for profitable systems and analyze the same historical price data. As a result, many of them will discover the same or very similar "profitable" trading systems, and when they begin actually trading those systems, they will become less and less profitable due to the nature of the market. The only sure winners in this scenario will be the technology and tool vendors.

I will still be keeping an eye on your posts, as I like your approach and the scientific rigor you apply. Your blog is the best of its kind – keep up the good work! One hint: there are profitable automated systems, but they are not based on historical price data but on proprietary knowledge about the market structure and operations of the major institutions which control these markets. Let's say there are many inefficiencies in the current system, but you have absolutely no chance to find the information about those by analyzing historical price data. Instead you have to know when and how the institutions will execute market-moving orders and front-run them.

Thanks for the extensive comment. I often hear these arguments and they indeed sound intuitive; the only problem is that they are easily proven wrong. The scientific way is experiment, not intuition. Simple tests show that past and future prices are often correlated – otherwise every second experiment on this blog would have had a very different outcome. Many successful funds, for instance Jim Simons' Renaissance fund, are mainly based on algorithmic prediction.

One more thing: in my comment I have been implicitly referring to the buy side (hedge funds, traders etc.), not to the sell side (market makers, banks). The latter always has the edge because they sell at the ask and buy at the bid, pocketing the spread as an additional profit on top of any strategy they might be running. Regarding Jim Simons' Renaissance: I am not so sure they have not transitioned over time to the sell side in order to stay profitable. There is absolutely no information available about the nature of their business besides the vague statement that they are using solely quantitative algorithmic trading models…

Thanks for the informative post! Regarding the use of some of these algorithms, a common complaint which is cited is that financial data is non-stationary… Do you find this to be a problem? Couldn't one just use returns data instead, which is (I think) stationary?

Yes, this is a problem for sure. If financial data were stationary, we'd all be rich. I'm afraid we have to live with what it is. Returns are not any more stationary than other financial data.
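For readers who want to look at the stationarity question on their own data, here is a small, purely illustrative R sketch using the augmented Dickey-Fuller test from the tseries package. The price series below is just a random-walk placeholder, not real market data, and the test result says nothing definitive either way.

library(tseries)   # provides adf.test

set.seed(1)
prices  <- 1.10 + cumsum(rnorm(1000, sd = 0.005))   # stand-in for a price series
returns <- diff(log(prices))                        # log returns of the same series

adf.test(prices)    # augmented Dickey-Fuller test on the price level
adf.test(returns)   # the same test on the returns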
Hello sir, I developed a set of rules for my trading which identify supply/demand zones, then volume and all other criteria. Can you help me to make it into an automated system? If I am going to do that myself it will take too much time. Please contact me at svadukiagmail if you are interested.

Sure, please contact my employer at infoopgroup.de. They'll help.

Technical analysis has always been rejected and looked down upon by quants, academics, or anyone who has been trained by traditional finance theories. I have worked for the proprietary trading desk of a first-tier bank for a good part of my career, surrounded by those ivy-league elites with backgrounds in finance, math, or financial engineering. I must admit none of those guys knew how to trade direction. They were good at market making, product structures, index arb, but almost none could make money trading direction. Why? Because none of these guys believed in technical analysis. Then again, if you are already making your millions, why bother taking the risk of trading direction with your own money. For me, luckily, my years of training in technical analysis allowed me to really retire after being laid off in the great recession. I look only at EMA, slow stochastics, and MACD; and I have made money every year since I started in 2009. Technical analysis works, you just have to know how to use it!!

Better Strategies 5: A Short-Term Machine Learning System.

It's time for the 5th and final part of the Build Better Strategies series. In part 3 we've discussed the development process of a model-based system, and consequently we'll conclude the series with developing a data-mining system. The principles of data mining and machine learning were the topic of part 4. For our short-term trading example we'll use a deep learning algorithm, a stacked autoencoder, but it will work the same way with many other machine learning algorithms. With today's software tools, only about 20 lines of code are needed for a machine learning strategy. I'll try to explain all steps in detail.

Our example will be a research project – a machine learning experiment for answering two questions. Does a more complex algorithm – such as more neurons and deeper learning – produce a better prediction? And are short-term price moves predictable by short-term price history? The last question came up due to my scepticism about price action trading in the previous part of this series. I got several emails asking about the "trading system generators" or similar price action tools that are praised on some websites. There is no hard evidence that such tools ever produced any profit (except for their vendors) – but does this mean that they all are garbage? We'll see.

Our experiment is simple: We collect information from the last candles of a price curve, feed it into a deep learning neural net, and use it to predict the next candles. My hypothesis is that a few candles don't contain any useful predictive information. Of course, a nonpredictive outcome of the experiment won't mean that I'm right, since I could have used wrong parameters or prepared the data badly. But a predictive outcome would be a hint that I'm wrong and price action trading can indeed be profitable.

Machine learning strategy development.

Step 1: The target variable.
To recap the previous part: a supervised learning algorithm is trained with a set of features in order to predict a target variable. So the first thing to determine is what this target variable shall be. A popular target, used in most papers, is the sign of the price return at the next bar. Better suited for prediction, since less susceptible to randomness, is the price difference to a more distant prediction horizon, like 3 bars from now, or the same day next week. Like almost anything in trading systems, the prediction horizon is a compromise between the effects of randomness (fewer bars are worse) and predictability (fewer bars are better).

Sometimes you're not interested in directly predicting price, but in predicting some other parameter – such as the current leg of a Zigzag indicator – that could otherwise only be determined in hindsight. Or you want to know if a certain market inefficiency will be present in the near future, especially when you're using machine learning not directly for trading, but for filtering trades in a model-based system. Or you want to predict something entirely different, for instance the probability of a market crash tomorrow. All this is often easier to predict than the popular tomorrow's return. In our price action experiment we'll use the return of a short-term price action trade as target variable. Once the target is determined, the next step is selecting the features.

Step 2: The features.

A price curve is the worst case for any machine learning algorithm. Not only does it carry little signal and mostly noise, it is also nonstationary and the signal/noise ratio changes all the time. The exact ratio of signal and noise depends on what is meant by "signal", but it is normally too low for any known machine learning algorithm to produce anything useful. So we must derive features from the price curve that contain more signal and less noise. Signal, in that context, is any information that can be used to predict the target, whatever it is. All the rest is noise. Thus, selecting the features is critical for success – much more critical than deciding which machine learning algorithm you're going to use. There are two approaches for selecting features.

The first and most common is extracting as much information from the price curve as possible. Since you do not know where the information is hidden, you just generate a wild collection of indicators with a wide range of parameters, and hope that at least a few of them will contain the information that the algorithm needs. This is the approach that you normally find in the literature. The problem of this method: any machine learning algorithm is easily confused by nonpredictive predictors. So it won't do to just throw 150 indicators at it. You need some preselection algorithm that determines which of them carry useful information and which can be omitted. Without reducing the features this way to maybe eight or ten, even the deepest learning algorithm won't produce anything useful.

The other approach, normally used for experiments and research, is using only limited information from the price curve. This is the case here: since we want to examine price action trading, we only use the last few prices as inputs, and must discard all the rest of the curve. This has the advantage that we don't need any preselection algorithm since the number of features is limited anyway.
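To make the target-variable part concrete before looking at the real features, here is a tiny R sketch under the assumption of a plain vector of closing prices; it derives a binary target from the sign of the price change over a 3-bar prediction horizon. The numbers and names are invented for the illustration and have nothing to do with the experiment's actual data.

# hypothetical closing prices, one per bar
Close <- c(1.1012, 1.1018, 1.1005, 1.0999, 1.1021, 1.1030, 1.1027, 1.1040)

Horizon <- 3                                   # prediction horizon in bars
n <- length(Close) - Horizon
FutureReturn <- Close[(1+Horizon):length(Close)] / Close[1:n] - 1
Target <- ifelse(FutureReturn > 0, 1, 0)       # 1 = price is higher 3 bars later, 0 = lower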
Here are the two simple predictor functions that we use in our experiment, written in C (the script containing them is in the repository linked at the end of this article). The two functions are supposed to carry the necessary information for price action: per-bar movement and volatility. The change function is the difference of the current price to the price of n bars before, divided by the current price. The range function is the total high-low distance of the last n candles, also divided by the current price. And the scale function centers and compresses the values to the +/-100 range, so we divide them by 100 to get them normalized to +/-1. We remember that normalizing is needed for machine learning algorithms.

Step 3: Preselecting/preprocessing predictors.

When you have selected a large number of indicators or other signals as features for your algorithm, you must determine which of them are useful and which are not. There are many methods for reducing the number of features, for instance: Determine the correlations between the signals, and remove those with a strong correlation to other signals, since they do not contribute to the information. Compare the information content of signals directly, with algorithms like information entropy or decision trees. Determine the information content indirectly by comparing the signals with randomized signals; there are some software libraries for this, such as the R Boruta package. Use an algorithm like Principal Component Analysis (PCA) for generating a new signal set with reduced dimensionality. Use genetic optimization for determining the most important signals just by the most profitable results from the prediction process – great for curve fitting if you want to publish impressive results in a research paper. For our experiment we do not need to preselect or preprocess the features, but you can find useful information about this in articles (1), (2), and (3) listed at the end of the page.

Step 4: Select the machine learning algorithm.

R offers many different ML packages, and each of them offers many different algorithms with many different parameters. Even if you have already decided about the method – here, deep learning – you still have the choice among different approaches and different R packages. Most are quite new, and you can find little empirical information that helps your decision. You have to try them all and gain experience with the different methods. For our experiment we've chosen the Deepnet package, which is probably the simplest and easiest-to-use deep learning library. This keeps our code short. We're using its Stacked Autoencoder (SAE) algorithm for pre-training the network. Deepnet also offers a Restricted Boltzmann Machine (RBM) for pre-training, but I could not get good results from it. There are other and more complex deep learning packages for R, so you can spend a lot of time checking out all of them.

How pre-training works is easily explained, but why it works is a different matter. To my knowledge, no one has yet come up with a solid mathematical proof that it works at all. Anyway, imagine a large neural net with many hidden layers: training the net means setting up the connection weights between the neurons. The usual method is error backpropagation. But it turns out that the more hidden layers you have, the worse it works. The backpropagated error terms get smaller and smaller from layer to layer, causing the first layers of the net to learn almost nothing. Which means that the predicted result becomes more and more dependent on the random initial state of the weights.
This severely limited the complexity of layer-based neural nets and therefore the tasks that they could solve. At least until 10 years ago. In 2006 scientists in Toronto first published the idea to pre-train the weights with an unsupervised learning algorithm, a restricted Boltzmann machine. This turned out to be a revolutionary concept. It boosted the development of artificial intelligence and allowed all sorts of new applications from Go-playing machines to self-driving cars.

In the case of a stacked autoencoder, it works this way: Select the hidden layer to train; begin with the first hidden layer. Connect its outputs to a temporary output layer that has the same structure as the network's input layer. Feed the network with the training samples, but without the targets. Train it so that the first hidden layer reproduces the input signal – the features – at its outputs as exactly as possible. The rest of the network is ignored. During training, apply a 'weight penalty term' so that as few connection weights as possible are used for reproducing the signal. Now feed the outputs of the trained hidden layer to the inputs of the next untrained hidden layer, and repeat the training process so that the input signal is now reproduced at the outputs of the next layer. Repeat this process until all hidden layers are trained. We now have a 'sparse network' with very few layer connections that can reproduce the input signals. Then train the network with backpropagation for learning the target variable, using the pre-trained weights of the hidden layers as a starting point. The hope is that the unsupervised pre-training process produces an internal noise-reduced abstraction of the input signals that can then be used for learning the target more easily. And this indeed appears to work. No one really knows why, but several theories – see paper (4) below – try to explain that phenomenon.

Step 5: Generate a test data set.

We first need to produce a data set with features and targets so that we can test our prediction process and try out parameters. The features must be based on the same price data as in live trading, and for the target we must simulate a short-term trade. So it makes sense to generate the data not with R, but with our trading platform, which is anyway a lot faster. A small Zorro script does this, DeepSignals.c: we're generating 2 years of data with features calculated by our above-defined change and range functions. Our target is the result of a trade with 3 bars lifetime. Trading costs are set to zero, so in this case the result is equivalent to the sign of the price difference 3 bars in the future. The adviseLong function is described in the Zorro manual; it is a mighty function that automatically handles training and predicting and allows you to use any R-based machine learning algorithm just as if it were a simple indicator. In our code, the function uses the next trade return as target, and the price changes and ranges of the last 4 bars as features. The SIGNALS flag tells it not to train the data, but to export it to a .csv file. The BALANCED flag makes sure that we get as many positive as negative returns; this is important for most machine learning algorithms. Run the script in [Train] mode with our usual test asset EUR/USD selected. It generates a spreadsheet file named DeepSignalsEURUSD_L.csv that contains the features in the first 8 columns, and the trade return in the last column.

Step 6: Calibrate the algorithm.

Complex machine learning algorithms have many parameters to adjust.
Some of them offer great opportunities to curve-fit the algorithm for publications. Still, we must calibrate parameters since the algorithm rarely works well with its default settings. For this we use an R script that reads the previously created data set and processes it with the deep learning algorithm (DeepSignal.r). In it we've defined three functions, neural.train, neural.predict, and neural.init, for training, predicting, and initializing the neural net. The function names are not arbitrary, but follow the convention used by Zorro's advise(NEURAL, …) function. It doesn't matter now, but will matter later when we use the same R script for training and trading the deep learning strategy. A fourth function, TestOOS, is used for out-of-sample testing of our setup.

The function neural.init seeds the R random generator with a fixed value (365 is my personal lucky number). Otherwise we would get a slightly different result any time, since the neural net is initialized with random weights. It also creates a global R list named "Models". Most R variable types don't need to be created beforehand, but some do (don't ask me why). The '<<-' operator is for accessing a global variable from within a function.

The function neural.train takes as input a model number and the data set to be trained. The model number identifies the trained model in the "Models" list. A list is not really needed for this test, but we'll need it for more complex strategies that train more than one model. The matrix containing the features and target is passed to the function as second parameter. If the XY data is not a proper matrix, which frequently happens in R depending on how you generated it, it is converted to one. Then it is split into the features (X) and the target (Y), and finally the target is converted to 1 for a positive trade outcome and 0 for a negative outcome.

The network parameters are then set up. Some are obvious, others are free to play around with: The network structure is given by the hidden vector: c(50,100,50) defines 3 hidden layers, the first with 50, the second with 100, and the third with 50 neurons. That's the parameter that we'll later modify for determining whether deeper is better. The activation function converts the sum of neuron input values to the neuron output; most often used are sigmoid, which saturates to 0 or 1, or tanh, which saturates to -1 or +1. We use tanh here since our signals are also in the +/-1 range. The output of the network is a sigmoid function since we want a prediction in the 0..1 range. But the SAE output must be "linear" so that the Stacked Autoencoder can reproduce the analog input signals on the outputs. The learning rate controls the step size for the gradient descent in training; a lower rate means finer steps and possibly more precise prediction, but longer training time. Momentum adds a fraction of the previous step to the current one. It prevents the gradient descent from getting stuck at a tiny local minimum or saddle point. The learning rate scale is a multiplication factor for changing the learning rate after each iteration (I am not sure what this is good for, but there may be tasks where a lower learning rate on higher epochs improves the training). An epoch is a training iteration over the entire data set. Training will stop once the number of epochs is reached. More epochs mean better prediction, but longer training. The batch size is the number of random samples – a mini batch – taken out of the data set for a single training run.
Splitting the data into mini batches speeds up training since the weight gradient is then calculated from fewer samples. The higher the batch size, the better the training, but the more time it will take. The dropout is a number of randomly selected neurons that are disabled during a mini batch. This way the net learns only with a part of its neurons. This seems a strange idea, but can effectively reduce overfitting. All these parameters are common for neural networks. Play around with them and check their effect on the result and the training time. Properly calibrating a neural net is not trivial and might be the topic of another article. The parameters are stored in the model together with the matrix of trained connection weights. So they need not be given again in the prediction function, neural.predict. It takes the model and a vector X of features, runs it through the layers, and returns the network output, the predicted target Y. Compared with training, prediction is pretty fast since it only needs a couple thousand multiplications. If X is a row vector, it is transposed and this way converted to a column vector, otherwise the nn.predict function won't accept it.

Use RStudio or some similar environment for conveniently working with R. Edit the path to the .csv data in the file above, source it, install the required R packages (deepnet, e1071, and caret), then call the TestOOS function from the command line. If everything works, it should print the statistics of a confusion matrix. TestOOS first reads our data set from Zorro's Data folder. It splits the data into 80% for training (XY.tr) and 20% for out-of-sample testing (XY.ts). The training set is trained and the result stored in the Models list at index 1. The test set is further split into features (X) and targets (Y). Y is converted to binary 0 or 1 and stored in Y.ob, our vector of observed targets. We then predict the targets from the test set, convert them again to binary 0 or 1 and store them in Y.pr. For comparing the observation with the prediction, we use the confusionMatrix function from the caret package. A confusion matrix of a binary classifier is simply a 2×2 matrix that tells how many 0's and how many 1's have been predicted wrongly and correctly. A lot of metrics are derived from the matrix and printed with it. The most important at the moment is the 62% prediction accuracy. This may hint that I bashed price action trading a little prematurely. But of course the 62% might have been just luck. We'll see that later when we run a WFO test.

A final piece of advice: R packages are occasionally updated, with the possible consequence that previous R code suddenly might work differently, or not at all. This really happens, so test carefully after any update.

Step 7: The strategy.

Now that we've tested our algorithm and got some prediction accuracy above 50% with a test data set, we can finally code our machine learning strategy. In fact we've already coded most of it, we just must add a few lines to the above Zorro script that exported the data set. The final script for training, testing, and (theoretically) trading the system is DeepLearn.c. We're using a WFO cycle of one year, split into a 90% training and a 10% out-of-sample test period. You might ask why I earlier used two years' data and a different split, 80/20, for calibrating the network in step 5. This is for using differently composed data for calibrating and for walk-forward testing.
If we used exactly the same data, the calibration might overfit it and compromise the test. The selected WFO parameters mean that the system is trained with about 225 days of data, followed by a 25-day test or trade period. Thus, in live trading the system would retrain every 25 days, using the prices from the previous 225 days. In the literature you'll sometimes find the recommendation to retrain a machine learning system after every trade, or at least every day. But this does not make much sense to me. When you've used almost 1 year's data for training a system, it obviously cannot deteriorate after a single day. Or if it did, and only produced positive test results with daily retraining, I would strongly suspect that the results are artifacts of some coding mistake.

Training a deep network takes a really long time, in our case about 10 minutes for a network with 3 hidden layers and 200 neurons. In live trading this would be done by a second Zorro process that is automatically started by the trading Zorro. In the backtest, the system trains at every WFO cycle. Therefore using multiple cores is recommended for training many cycles in parallel. The NumCores variable at -1 activates all CPU cores but one. Multiple cores are only available in Zorro S, so a complete walk-forward test with all WFO cycles can take several hours with the free version.

In the script we now train both long and short trades. For this we have to allow hedging in Training mode, since long and short positions are open at the same time. Entering a position is now dependent on the return value from the advise function, which in turn calls either the neural.train or the neural.predict function from the R script. So we're here entering positions when the neural net predicts a result above 0.5. The R script is now controlled by the Zorro script (for this it must have the same name, NeuralLearn.r, only with a different extension). It is identical to our R script above since we're using the same network parameters. Only one additional function is needed for supporting a WFO test: the neural.save function stores the Models list – it now contains 2 models, for long and for short trades – after every training run in Zorro's Data folder. Since the models are stored for later use, we do not need to train them again for repeated test runs.

This is the WFO equity curve generated with the script above (EUR/USD, without trading costs): [Figure: EUR/USD equity curve with 50-100-50 network structure.] Although not all WFO cycles get a positive result, it seems that there is some predictive effect. The curve is equivalent to an annual return of 89%, achieved with a 50-100-50 hidden layer structure. We'll check in the next step how different network structures affect the result.

Since the neural.init, neural.train, neural.predict, and neural.save functions are automatically called by Zorro's adviseLong/adviseShort functions, there are no R functions directly called in the Zorro script. Thus the script can remain unchanged when using a different machine learning method. Only the DeepLearn.r script must be modified and the neural net, for instance, replaced by a support vector machine. For trading such a machine learning system live on a VPS, make sure that R is also installed on the VPS, the needed R packages are installed, and the path to the R terminal is set up in Zorro's ini file. Otherwise you'll get an error message when starting the strategy.

Step 8: The experiment.
If our goal had been developing a strategy, the next steps would be the reality check, risk and money management, and preparing for live trading, just as described under model-based strategy development. But for our experiment we'll now run a series of tests, with the number of neurons per layer increased from 10 to 100 in 3 steps, and 1, 2, or 3 hidden layers (deepnet does not support more than 3). So we're looking into the following 9 network structures: c(10), c(10,10), c(10,10,10), c(30), c(30,30), c(30,30,30), c(100), c(100,100), c(100,100,100). This experiment needs an afternoon even with a fast PC and in multiple-core mode.

Here are the results (SR = Sharpe ratio, R2 = slope linearity): [Table: test results of the 9 network structures.] We see that a simple net with only 10 neurons in a single hidden layer won't work well for short-term prediction. Network complexity clearly improves the performance, however only up to a certain point. A good result for our system is already achieved with 3 layers x 30 neurons. Even more neurons won't help much and sometimes even produce a worse result. This is no real surprise, since for processing only 8 inputs, 300 neurons can likely not do a better job than 100.

Conclusion.

Our goal was determining if a few candles can have predictive power and how the results are affected by the complexity of the algorithm. The results seem to suggest that short-term price movements can indeed sometimes be predicted by analyzing the changes and ranges of the last 4 candles. The prediction is not very accurate – it's in the 58%..60% range, and most systems of the test series become unprofitable when trading costs are included. Still, I have to reconsider my opinion about price action trading. The fact that the prediction improves with network complexity is an especially convincing argument for short-term price predictability.

It would be interesting to look into the long-term stability of predictive price patterns. For this we would have to run another series of experiments and modify the training period (WFOPeriod in the script above) and the 90% IS/OOS split. This takes longer since we must use more historical data. I have done a few tests and found so far that a year seems indeed to be a good training period. The system deteriorates with periods longer than a few years. Predictive price patterns, at least of EUR/USD, have a limited lifetime.

Where can we go from here? There's a plethora of possibilities, for instance: Use inputs from more candles and process them with far bigger networks with thousands of neurons. Use oversampling for expanding the training data; prediction always improves with more training samples. Compress time series, for instance with spectral analysis, and analyze not the candles, but their frequency representation with machine learning methods. Use inputs from many candles – such as 100 – and pre-process adjacent candles with one-dimensional convolutional network layers. Use recurrent networks; especially LSTM could be very interesting for analyzing time series – and to my knowledge, they have been rarely used for financial prediction so far. Use an ensemble of neural networks for prediction, such as Aronson's "oracles" and "committees".

Papers / Articles. (3) V. Perervenko, Selection of Variables for Machine Learning.

I've added the C and R scripts to the 2016 script repository. You need both in Zorro's Strategy folder. Zorro version 1.474 and R version 3.2.5 (64 bit) were used for the experiment, but it should also work with other versions.
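For orientation only, here is a heavily condensed sketch of how the neural.* convention described in Step 6 can look when built on the Deepnet package. This is not the repository script – the parameter values are illustrative and the housekeeping of the real scripts is omitted – but it shows the overall shape of the R side.

library(deepnet)

Models <- list()

neural.init <- function() {
  set.seed(365)               # fixed seed for reproducible weight initialization
  Models <<- list()
}

neural.train <- function(model, XY) {
  XY <- as.matrix(XY)
  X <- XY[, -ncol(XY)]                     # features
  Y <- ifelse(XY[, ncol(XY)] > 0, 1, 0)    # binary target: 1 = winning trade
  Models[[model]] <<- sae.dnn.train(X, Y,
    hidden = c(50, 100, 50),               # 3 hidden layers
    activationfun = "tanh",
    output = "sigm", sae_output = "linear",
    learningrate = 0.5, momentum = 0.5, learningrate_scale = 1.0,
    numepochs = 100, batchsize = 100)
}

neural.predict <- function(model, X) {
  if(is.vector(X)) X <- t(X)               # a single sample must be a one-row matrix
  nn.predict(Models[[model]], X)
}

The neural.save counterpart would simply write the Models list to a file in Zorro's Data folder, for instance with save(Models, file = name).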
62 thoughts on "Better Strategies 5: A Short-Term Machine Learning System"

I've tested your strategy using 30-min AAPL data but "sae.dnn.train" returns all NaN in training. (It works just by decreasing the neurons to less than (5,10,5)… but the accuracy is 49%.) Can you help me to understand why? Thanks in advance.

If you have not changed any SAE parameters, look into the .csv data. It is then the only difference to the EUR/USD test. Maybe something is wrong with it.

Another fantastic article, jcl. Zorro is a remarkable environment for these experiments. Thanks for sharing your code and your approach – this really opens up an incredible number of possibilities to anyone willing to invest the time to learn how to use Zorro.

The problem with the AAPL 30-min data was related to the normalizing method I used ((X-mean)/SD). The feature range was not between -1 and 1 and I assume that sae.dnn needs that to work… Anyway performances are not comparable to yours 🙂 I have one question: why do you use Zorro for creating the features in the csv file and then open it in R? Why not create the file with all the features in R in a few lines and do the training on the file when you are already in R, instead of going into Zorro and then to R?

When you want R to create the features, you must still transmit the price data and the targets from Zorro to R. So you are not gaining much. Creating the features in Zorro usually results in shorter code and faster training. Features in R only make sense when you need some R package for calculating them.

Really helpful and interesting article! I would like to know if there is any English version of the book "Das Börsenhackerbuch: Finanziell unabhängig durch algorithmische Handelssysteme". I am really interested in it.

Not yet, but an English version is planned.

Thanks JCL! Please let me know when the English version is ready, because I am really interested in it.

Works superbly (as always). Thanks. One small note: if you have the package "dlm" loaded in R, TestOOS will fail with the error "Error in TestOOS() : cannot change value of locked binding for 'X'". This is due to there being a function X in the dlm package, so the name is locked when the package is loaded. Easily fixed by either renaming occurrences of the variable X to something else, or temporarily detaching the dlm package with: detach("package:dlm", unload=TRUE)

Thanks for the info with the dlm package. I admit that 'X' is not a particularly good name for a variable, but a function named 'X' in a distributed package is even a bit worse.

Results below were generated by a revised version of DeepSignals.r – the only change was the use of an LSTM net from the rnn package on CRAN. The authors of the package regard their LSTM implementation as "experimental" and do not feel it is as yet learning properly, so hopefully more improvement to come there. (Spent ages trying to accomplish the LSTM element using the mxnet package but gave up as I couldn't figure out the correct input format when using multiple training features.) Will post results of the full WFO when I have finished the LSTM version of DeepLearn.r.

Confusion Matrix and Statistics:
95% CI : (0.5699, 0.5956)
No Information Rate : 0.5002
P-Value [Acc > NIR] : <2e-16
Mcnemar's Test P-Value : 0.2438
Pos Pred Value : 0.5844
Neg Pred Value : 0.5813
Detection Rate : 0.2862
Detection Prevalence : 0.4897
Balanced Accuracy : 0.5828

Results of the WFO test below. Again, the only change to the original files was the use of LSTM in R, rather than DNN+SAE.

Walk-Forward Test DeepLearnLSTMV4 EUR/USD
Simulated account AssetsFix
Bar period 1 hour (avg 87 min)
Simulation period 15.05.2014-07.06.2016 (12486 bars)
Test period 04.05.2015-07.06.2016 (6649 bars)
Lookback period 100 bars (4 days)
WFO test cycles 11 x 604 bars (5 weeks)
Training cycles 12 x 5439 bars (46 weeks)
Monte Carlo cycles 200
Assumed slippage 0.0 sec
Spread 0.0 pips (roll 0.00/0.00)
Contracts per lot 1000.0
Gross win/loss 3628$ / -3235$ (+5199p)
Average profit 360$/year, 30$/month, 1.38$/day
Max drawdown -134$ 34% (MAE -134$ 34%)
Total down time 95% (TAE 95%)
Max down time 5 weeks from Aug 2015
Max open margin 40$
Max open risk 35$
Trade volume 5710964$ (5212652$/year)
Transaction costs 0.00$ spr, 0.00$ slp, 0.00$ rol
Capital required 262$
Number of trades 6787 (6195/year, 120/week, 25/day)
Percent winning 57.6%
Max win/loss 16$ / -14$
Avg trade profit 0.06$ 0.8p (+12.3p / -14.8p)
Avg trade slippage 0.00$ 0.0p (+0.0p / -0.0p)
Avg trade bars 1 (+1 / -2)
Max trade bars 3 (3 hours)
Time in market 177%
Max open trades 3
Max loss streak 17 (uncorrelated 11)
Annual return 137%
Profit factor 1.12 (PRR 1.08)
Sharpe ratio 1.79
Kelly criterion 2.34
R2 coefficient 0.435
Ulcer index 13.3%
Prediction error 152%
Confidence level AR DDMax Capital
Portfolio analysis OptF ProF Win/Loss Wgt% Cycles
EUR/USD .219 1.12 3907/2880 100.0 XX/\//\X///
EUR/USD:L .302 1.17 1830/1658 65.0 /\/\//\////
EUR/USD:S .145 1.08 2077/1222 35.0 \//\//\\///

Impressive! For a still experimental LSTM implementation that result looks not bad.

Sorry for being completely off topic, but could you please point me to the best place where I can learn to code trend lines? I'm a complete beginner, but from trading experience I see them as an important part of what I would like to build…

Robot Wealth has an algorithmic trading course for that – you can find details on his blog robotwealth/.

I think you misunderstand the meaning of pretraining. See my article mql5/ru/articles/1103; I think this stage is described there more fully.

I don't think I misunderstood pretraining, at least not more than everyone else, but thanks for the links!

Can you please paste your LSTM R code?

Could you help me answer some questions? I have a few questions below: 1. I want to test Commission mode. If I use Interactive Brokers, what should I set Commission to in the normal case? 2. If I press the "Trade" button, I see in the log that the script will use DeepLearn_EURUSD.ml. So in real trading it will use DeepLearn_EURUSD.ml to get the model to trade? And use the neural.predict function to trade? 3. If I use a slow computer to train the data, should I move DeepLearn_EURUSD.ml to the trade computer?

I tested real trading on my Interactive Brokers account and pressed the Result button. Can I use Commission=0.60 to train the neural net and get the real result? The Result button shows the message below:
Trade Trend EUR/USD
Bar period 2 min (avg 2 min)
Trade period 02.11.2016-02.11.2016
Spread 0.5 pips (roll -0.02/0.01)
Contracts per lot 1000.0

Commission should normally not be set up in the script, but entered in the broker-specific asset list. Otherwise you would have to change the script every time you want to test it with a different broker or account. IB has different lot sizes and commissions, so you need to add the command to the script when you want to test it for an IB account. Yes, DeepLearn_EURUSD.ml is the model for live trading, and you need to copy it to the trade computer.

Did I write assetList("AssetsIB.csv") in the right place? So does the result of the code below include Commission?
I tested the result with Commission and it seems pretty good: Annual +93%, +3177p.
BarPeriod = 60; // 1 hour
WFOPeriod = 252*24; // 1 year
NumCores = -1; // use all CPU cores but one
Spread = RollLong = RollShort = Commission = Slippage = 0;
if(Train) Hedge = 2;

I ran DeepLearn.c in IB paper trading. The code "LifeTime = 3; // prediction horizon" seems to close the position that you open after 3 bars (3 hours). But I can't see it close the position on the third bar close. I see the logs below:
Closing prohibited – check NFA flag! [EUR/USD::L4202] Can't close 11.10995 at 09:10:51
In my IB paper trading, the default order size is 1k on EUR/USD. How do I change the order size in paper trading? Thanks a lot.

IB is an NFA-compliant broker. You cannot close trades on NFA accounts. You must set the NFA flag for opening a reverse position instead. And you must enable trading costs, otherwise including the commission has no effect. I don't think that you will get a positive result with trading costs. Those account issues are not related to machine learning, and are better asked on the Zorro forum. Or even better, read the Zorro manual where all this is explained. Just search for "NFA".

I did some experiments changing the neural net's parameters with commission. The code is below:
BarPeriod = 60; // 1 hour
WFOPeriod = 252*24; // 1 year
NumCores = -1; // use all CPU cores but one
Spread = RollLong = RollShort = Slippage = 0;
if(Train) Hedge = 2;
I get the result with commission that the Annual Return is about +23%. But I don't completely understand Zorro's settings and Zorro's report.

Walk-Forward Test DeepLearn EUR/USD
Simulated account AssetsIB.csv
Bar period 1 hour (avg 86 min)
Simulation period 15.05.2014-09.09.2016 (14075 bars)
Test period 23.04.2015-09.09.2016 (8404 bars)
Lookback period 100 bars (4 days)
WFO test cycles 14 x 600 bars (5 weeks)
Training cycles 15 x 5401 bars (46 weeks)
Monte Carlo cycles 200
Simulation mode Realistic (slippage 0.0 sec)
Spread 0.0 pips (roll 0.00/0.00)
Contracts per lot 20000.0
Gross win/loss 24331$ / -22685$ (+914p)
Average profit 1190$/year, 99$/month, 4.58$/day
Max drawdown -1871$ 114% (MAE -1912$ 116%)
Total down time 92% (TAE 41%)
Max down time 18 weeks from Dec 2015
Max open margin 2483$
Max open risk 836$
Trade volume 26162350$ (18916130$/year)
Transaction costs 0.00$ spr, 0.00$ slp, 0.00$ rol, -1306$ com
Capital required 5239$
Number of trades 1306 (945/year, 19/week, 4/day)
Percent winning 52.5%
Max win/loss 375$ / -535$
Avg trade profit 1.26$ 0.7p (+19.7p / -20.3p)
Avg trade slippage 0.00$ 0.0p (+0.0p / -0.0p)
Avg trade bars 2 (+2 / -3)
Max trade bars 3 (3 hours)
Time in market 46%
Max open trades 3
Max loss streak 19 (uncorrelated 10)
Annual return 23%
Profit factor 1.07 (PRR 0.99)
Sharpe ratio 0.56
Kelly criterion 1.39
R2 coefficient 0.000
Ulcer index 20.8%
Confidence level AR DDMax Capital
10% 29% 1134$ 4153$
20% 27% 1320$ 4427$
30% 26% 1476$ 4656$
40% 24% 1649$ 4911$
50% 23% 1767$ 5085$
60% 22% 1914$ 5301$
70% 21% 2245$ 5789$
80% 19% 2535$ 6216$
90% 16% 3341$ 7403$
95% 15% 3690$ 7917$
100% 12% 4850$ 9625$
Portfolio analysis OptF ProF Win/Loss Wgt% Cycles
EUR/USD .256 1.07 685/621 100.0 /X/XXXXXXXXXXX

The manual is your friend:

Great read… I built this framework to use XGB to analyze live ETF price movements. Let me know what you think:

Hi, deep learning researcher and programmer here. 🙂 Great blog and great article, congratulations! I have some comments: – if you use ReLUs as activation functions, pretraining is not necessary.
– An AE is generally referred to as a network with the same input and output; I would rather call the proposed network an MLP (multi-layer perceptron). Do you think it is possible to use Python-based (like TensorFlow) or Lua-based (like Torch7) deep learning libraries with Zorro?

I have also heard that ReLUs make a network so fast that you can brute-force train it in some cases, with no pretraining. But I have not yet experimented with that. The described network is commonly called 'SAE' since it uses autoencoders, with indeed the same number of inputs and outputs, for the pre-training process. – I am not familiar with Torch7, but you can theoretically use TensorFlow with Zorro through a DLL-based interface. The network structure must still be defined in Python, but Zorro can use the network for training and prediction.

Would you do YouTube tutorials on your work and this series of articles? And where can I subscribe to this kind of algorithmic trading tutorial? Thanks for your contribution.

I would do YouTube tutorials if someone paid me very well for them. Until then, you can subscribe to this blog with the link on the right above.

Why not feed economic data from a calendar like forexfactory into the net as well? I suggested that several times before. This data is what makes me a profitable manual trader (rookie though); if there is any intelligence in these neural networks it should improve performance greatly. The input must be the name (non-farm payrolls for example, or some unique identifier), time left to release, predicted value (like 3-5 days before), last value and revision. Some human institutional traders claim it's possible to trade profitably from this data alone, without a chart. Detecting static support and resistance areas (horizontal lines) should be superior to any simple candle patterns. It can be mathematically modeled, as the Support and Resistance indicator from Point Zero Trading proves. Unfortunately I don't have a clue how Arturo the programmer did it. I imagine an artificial intelligence actually "seeing" what the market is focused on (like speculation on a better-than-expected NFP report based on other positive data in the days before, driving the dollar up into the report). "Seeing" significant support and resistance levels should allow for trading risk, making reasonable decisions on where to place SL and TP.

We have also found that well-chosen external data, not derived from the price curve, can improve the prediction. There is even a trading system based on Trump's Twitter outpourings. I can't comment on support and resistance since I know no successful systems that use them, and am not sure that they exist at all.

Thank you very much for everything that you did so far. I read the book (German here, too) and am working through your blog articles right now. I already learnt a lot and am still learning more and more about the really important stuff (other than: your mindset must be perfect and you need to have well-defined goals – I never was a fan of such things, and finally I found someone who is of the same opinion and actually teaches people how to do it correctly). So, thank you very much and thanks in advance for all upcoming articles that I will read and you will post. As a thank-you I was thinking about sending you a corrected version of your book (there are some typos and wrong articles here and there…). Would you be interested in that? Again thank you for everything and please keep up the good work. Thanks!
And I'm certainly interested in a list of all my mistakes.

Thank you for this interesting post. I ran it on my PC and obtained results similar to yours. Then I wanted to see if it could perform as well when commission, rollover and slippage were included during the test. I used the same figures as the ones used in the workshops and included in the AssetFix.csv file. The modifications I did in your DeepLearn.c file are as follows: Spread = RollLong = RollShort = Commission = Slippage = 0; The results then were not as optimistic as without commission:

Walk-Forward Test DeepLearn_realistic EUR/USD
Simulated account AssetsFix
Bar period 1 hour (avg 86 min)
Simulation period 09.05.2014-27.01.2017 (16460 bars)
Test period 22.04.2015-27.01.2017 (10736 bars)
Lookback period 100 bars (4 days)
WFO test cycles 18 x 596 bars (5 weeks)
Training cycles 19 x 5367 bars (46 weeks)
Monte Carlo cycles 200
Simulation mode Realistic (slippage 5.0 sec)
Spread 0.5 pips (roll -0.02/0.01)
Contracts per lot 1000.0
Gross win/loss 5608$ / -6161$ (-6347p)
Average profit -312$/year, -26$/month, -1.20$/day
Max drawdown -635$ -115% (MAE -636$ -115%)
Total down time 99% (TAE 99%)
Max down time 85 weeks from Jun 2015
Max open margin 40$
Max open risk 41$
Trade volume 10202591$ (5760396$/year)
Transaction costs -462$ spr, 46$ slp, -0.16$ rol, -636$ com
Capital required 867$
Number of trades 10606 (5989/year, 116/week, 24/day)
Percent winning 54.9%
Max win/loss 18$ / -26$
Avg trade profit -0.05$ -0.6p (+11.1p / -14.8p)
Avg trade slippage 0.00$ 0.0p (+1.5p / -1.7p)
Avg trade bars 1 (+1 / -2)
Max trade bars 3 (3 hours)
Time in market 188%
Max open trades 3
Max loss streak 19 (uncorrelated 12)
Annual return -36%
Profit factor 0.91 (PRR 0.89)
Sharpe ratio -1.39
Kelly criterion -5.39
R2 coefficient 0.737
Ulcer index 100.0%
Confidence level AR DDMax Capital
Portfolio analysis OptF ProF Win/Loss Wgt% Cycles
EUR/USD .000 0.91 5820/4786 100.0 XX/\XX\X\X/X/\\X\\

I am a real beginner with Zorro – maybe I made a mistake? What do you think?

No, your results look absolutely OK. The predictive power of 4 candles is very weak. This is just an experiment for finding out if price action has any predictive power at all. Although it apparently has, I have not yet seen a really profitable system with this method. Of the machine learning systems that we've programmed so far, all that turned out profitable used data from a longer price history.

Thank you for the great article, it's exactly what I needed in order to start experimenting with ML in Zorro. I've noticed that the results are slightly different each time despite using the random seed. Here it doesn't matter thanks to the large number of trades, but for example with daily bars the performance metrics fluctuate much more. My question is: do you happen to know where the randomness comes from? Is it still the training process in R despite the seed?

It is indeed so. Deepnet apparently also uses an internal function, not only the R random function, for randomizing some initial value.

Any idea how to use machine learning as in this example, but with indicators? You could do that as Better Strategies 6. It would be very interesting.

Is grid search allowed inside the neural.train function? I get an error when I try it. Besides, Andy, how did you end up defining the LSTM structure using rnn? It is not clear to me after reading inside the package.
Where is the complete code? (Or where is the repository?)

You said: "Use genetic optimization for determining the most important signals just by the most profitable results from the prediction process. Great for curve fitting." How about, after using the genetic optimization process to determine the most profitable signals, matching and measuring those signals with distance metrics / similarity analysis (mutual information, DTW, the Fréchet distance algorithm, etc.), and then using that distance / similarity analysis as a function for the neural network prediction? Does that make sense?

Distance to what? To each other?

Yes: find similar profitable signal patterns in history, find the distance between those patterns / profitable signals, and then predict the future behavior of the profitable signal from the past patterns.

I was wondering about this point you made in Step 5: "Our target is the return of a trade with 3 bars life time." But doesn't the code mean that we are actually predicting the SIGN of the return, rather than the return itself?

Yes indeed. Only the binary win/loss result, but not the magnitude of the win or loss, is used for the prediction.

"When you used almost 1 year's data for training a system, it can obviously not deteriorate after a single day. Or if it did, and only produced positive test results with daily retraining, I would strongly suspect that the results are artifacts of some coding mistake."

There is an additional trap to be aware of, related to jcl's comment above, that applies to supervised machine learning techniques (where you train a model against actual outcomes). Assume you are trying to predict the return three bars ahead (as in the example above – LifeTime = 3;). In real time you obviously don't have access to the outcomes one, two and three bars ahead with which to retrain your model, but when using historical data you do. With frequently retrained models (especially if using relatively short blocks of training data) it is easy to train a model offline (and get impressive results) with data you will not have available for training in real time. Then reality kicks in. Therefore truncating your offline training set by N bars (where N is the number of bars ahead you are trying to predict) may well be advisable (a rough sketch of such truncation follows at the end of this comment)…

Amazing work. Could you please share the WFO code as well? I was able to run the code up to neural.save but was unable to generate the WFO results. Thanks a lot.

The code above does use WFO.

Dear jcl, in the text you mentioned that you could predict the current leg of a zigzag indicator. Could you please elaborate on how to do that? What features and responses would you recommend?

I would never claim that I could predict the current leg of a zigzag indicator. But we have indeed coded a few systems that attempted that. For this, simply use not the current price movement, but the current zigzag slope as the training target. Which parameters you use for the features is completely up to you.

Good work. I was wondering if you ever tried using something like a net long/short ratio of the asset (i.e. the FXCM SSI index – real-time live data) as a feature to improve the prediction?

Not with the FXCM SSI index, since it is not available as historical data as far as I know. But similar data from other markets, such as order book content, the COT report or the like, have been used as features for a machine learning system.

I see, thanks. And what's your experience with those? Do they have any predictive power? If you know of any materials on this, I would be very interested in reading them.
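Picking up the look-ahead trap described a few comments above: the truncation can be made explicit when assembling the retraining data. The sketch below is in R with made-up variable names; it simply drops the last N samples, whose outcomes would not yet be known in real time.

# Sketch: avoid look-ahead when the target is the outcome N bars ahead.
# X and Y are hypothetical feature and outcome containers.
set.seed(1)
X <- matrix(rnorm(500 * 4), ncol = 4)      # dummy features, one row per bar
Y <- as.integer(rnorm(500) > 0)            # dummy outcomes of trades with 3 bars life time
N <- 3                                     # LifeTime = 3 bars ahead

# The outcomes of the most recent N bars are not yet known in real time,
# so exclude them from the (re)training block.
n <- nrow(X)
X_train <- X[1:(n - N), , drop = FALSE]
Y_train <- Y[1:(n - N)]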
(FYI, the SSI index can be exported from FXCM Trading Station – daily data from 2003 for most currency pairs.)

Thanks for the info about the SSI. Yes, additional market data can have predictive power, especially from the order book. But since we gathered this experience with contract work for clients, I'm not at liberty to disclose details. However, we plan our own study with ML evaluation of additional data, and that might result in an article on this blog.

Thanks jcl, looking forward to it! There is a way to record SSI ratios in a CSV file from a Lua strategy script (FXCM's scripting language) for live evaluation. Happy to give you some details if you decide to evaluate this (drop me an email). MyFxbook also has a similar indicator, but unfortunately no historical data for that one.
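As an illustration of how such an exported sentiment series could enter the feature set, here is a small R sketch that merges a daily long/short ratio with a price-derived feature by date. All names and values are hypothetical; in practice the sentiment frame would be read from the exported CSV file rather than generated.

# Sketch: merge an external daily sentiment series (e.g. exported SSI ratios)
# with price-derived features by date. Names and values are hypothetical.
dates     <- as.Date("2016-01-01") + 0:99
prices    <- data.frame(Date = dates, Change = rnorm(100))                # dummy price changes
sentiment <- data.frame(Date = dates, LongShortRatio = runif(100, 0.5, 2))

features <- merge(prices, sentiment, by = "Date", all.x = TRUE)
features$LongShortRatio[is.na(features$LongShortRatio)] <- 1              # neutral fill for gaps

X <- as.matrix(features[, c("Change", "LongShortRatio")])                 # feature matrix for training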