Brain-computer interfaces (BCIs) provide an alternative communication pathway between humans and external devices. Three major paradigms are commonly employed for BCI: motor imagery (MI), event-related potential (ERP), and steady-state visually evoked potential (SSVEP). Each paradigm has its own pros and cons in terms of the available number of classes, eye and mental fatigue, illiteracy rate, and so on. When designing BCI applications, a wide range of factors must be taken into account, such as the patient's psychological and physical condition, environmental limitations, and the required number of classes and accuracy level. Given the limitations of the individual paradigms, it may not always be possible to satisfy all requirements with a single unimodal paradigm. In this study, we propose the concept of a paradigm-independent BCI framework in which all three paradigms are available at the same time and can be used interchangeably. To this end, task-related features were extracted from the individual paradigms and cross-validated. The average classification accuracy for three-paradigm decoding was 74.84% (±8.49) across 49 subjects. By considering the three major BCI paradigms, namely MI, ERP, and SSVEP, and by creating a machine-learning framework that successfully decodes the paradigm from individual trials, we pave the way toward new and more complex applications in which the limitations of unimodal BCI paradigms are alleviated.
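
To make the decoding step concrete, the following is a minimal sketch of a three-class paradigm decoder evaluated with cross-validation, in the spirit of the framework described above. It is not the authors' implementation: the synthetic data, the feature dimensionality, and the shrinkage-LDA classifier are assumptions for illustration only.

```python
# Illustrative sketch: decode which paradigm (MI, ERP, or SSVEP) a single
# trial belongs to, and estimate accuracy with 10-fold cross-validation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data: 300 single trials, each reduced to a 32-dimensional
# feature vector. In a real pipeline this vector would concatenate
# task-related descriptors, e.g. mu/beta band power (MI), post-stimulus
# amplitudes (ERP), and spectral power at flicker frequencies (SSVEP).
n_trials, n_features = 300, 32
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 3, size=n_trials)  # 0 = MI, 1 = ERP, 2 = SSVEP

# Standardize features, then apply shrinkage-regularized LDA, a common
# choice for EEG classification with limited trials per subject.
clf = make_pipeline(
    StandardScaler(),
    LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
)

# Cross-validated paradigm-decoding accuracy for this (synthetic) subject.
scores = cross_val_score(clf, X, y, cv=10)
print(f"Mean accuracy: {scores.mean():.2%} (±{scores.std():.2%})")
```

Run per subject on real feature vectors, the mean and standard deviation of such per-subject scores would correspond to the accuracy figure reported above.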