
Design and development of VoIP software on an embedded processor


Business process mining: An industrial application
Original Research Article, Information Systems, Volume 32, Issue 5, July 2007, Pages 713-732
W.M.P. van der Aalst, H.A. Reijers, A.J.M.M. Weijters, B.F. van Dongen, A.K. Alves de Medeiros, M. Song, H.M.W. Verbeek

91

Approximate solution for two stage open networks with Markov-modulated queues minimizing the state space explosion problem
Original Research Article, Journal of Computational and Applied Mathematics, Volume 223, Issue 1, 1 January 2009, Pages 519-533
Orhan Gemikonakli, Enver Ever, Altan Kocyigit


Abstract

Analytical solutions for two-dimensional Markov processes suffer from the state space explosion problem. Two-stage tandem networks are effectively used for analytical modelling of various communication and computer systems that exhibit tandem behaviour, and performance evaluation of tandem systems with feedback can be handled with these models. However, because of the numerical difficulties caused by large state spaces, it has not been possible to consider server failures and repairs at a second stage that employs multiple servers. The solution proposed in this paper is approximate, with a high degree of accuracy. Using this approach, two-stage open networks with multiple servers, breakdowns and repairs at the second stage, as well as feedback, can be modelled as three-dimensional Markov processes and solved for performability measures. Results show that, unlike with other approaches such as spectral expansion, the steady-state solution is possible regardless of the number of servers employed.
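For readers unfamiliar with the underlying computation, the sketch below (in Python, with a made-up three-state generator matrix) shows the basic steady-state calculation for a small continuous-time Markov chain. It illustrates why the state space explosion problem matters; it does not reproduce the authors' three-dimensional approximation.

```python
# Minimal sketch (not the paper's method): steady-state distribution of a
# small continuous-time Markov chain. At scale, the size of Q is exactly what
# the state space explosion problem makes unmanageable.
import numpy as np

def steady_state(Q: np.ndarray) -> np.ndarray:
    """Solve pi @ Q = 0 with sum(pi) = 1 for an irreducible generator Q."""
    n = Q.shape[0]
    # Replace one balance equation with the normalization constraint.
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Toy 3-state birth-death chain: arrival rate 1.0, service rate 1.5.
lam, mu = 1.0, 1.5
Q = np.array([[-lam,         lam,   0.0],
              [  mu, -(lam + mu),   lam],
              [ 0.0,          mu,   -mu]])
print(steady_state(Q))  # stationary probabilities of states 0, 1, 2
```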

Article Outline

1. Introduction 2. Two stage tandem networks under study 3. The three-dimensional solution approach 4. The accuracy of the three-dimensional solution approach 5. The efficacy of the proposed method 6. Conclusion References

92

Human factors evaluations of Free Flight: Issues solved and issues remaining
Original Research Article, Applied Ergonomics, Volume 38, Issue 4, July 2007, Pages 437-455
Rob C.J. Ruigrok, Jacco M. Hoekstra


Abstract

The Dutch National Aerospace Laboratory (NLR) has conducted extensive human-in-the-loop simulation experiments in NLR's Research Flight Simulator (RFS), focused on human factors evaluation of Free Flight. Eight years of research, in co-operation with partners in the United States and Europe, has shown that Free Flight has the potential to increase airspace capacity by at least a factor of 3. Expected traffic loads and conflict rates for the year 2020 appear to pose no major problem for professional airline crews participating in flight simulation experiments. Flight efficiency is significantly improved by user-preferred routings, including cruise climbs, while pilot workload is only slightly increased compared to today's reference.

Detailed results from three projects and six human-in-the-loop experiments in NLR's Research Flight Simulator are reported. The main focus of these results is on human factors issues and particularly workload, measured both subjectively and objectively. An extensive discussion is included of the many human factors issues resolved during the experiments, but open issues are also identified.

An intent-based Conflict Detection and Resolution (CD&R) system provides "benefits" in terms of reduced pilot workload, but also "costs" in terms of complexity, the need for priority rules, potential compatibility problems between different brands of Flight Management Systems, and large bandwidth requirements. Moreover, the intent-based system is not effective at solving multi-aircraft conflicts. A state-based CD&R system also has "benefits" and "costs". Its benefits compared to the full intent-based system are simplicity, low bandwidth requirements, ease of retrofit (no requirement to change the avionics infrastructure) and the ability to solve multi-aircraft conflicts in parallel. The "costs" involve a somewhat higher pilot workload in similar circumstances, a smaller look-ahead time that results in less efficient resolution manoeuvres, and occasional false/nuisance alerts due to missing intent information.

The optimal CD&R system (in terms of costs versus benefits) has been suggested to be state-based CD&R with the addition of intended or target flight level. This combination of state-based CD&R with a limited amount of intent provides "the best of both worlds". Studying this CD&R system is still an open issue.
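As context for the state-based CD&R discussion, here is a minimal closest-point-of-approach check in Python. The protected-zone radius and look-ahead time are illustrative values, not NLR's parameters, and the function is only a sketch of the general technique of detecting conflicts from current state (position and velocity) alone.

```python
# Sketch of the core of a state-based conflict detection check: predict the
# time and distance of closest approach from current states only, and flag a
# conflict if the predicted separation falls below a protected-zone radius
# within the look-ahead time. Units: nautical miles and seconds.
import numpy as np

def state_based_conflict(p1, v1, p2, v2, zone_radius_nm=5.0, look_ahead_s=300.0):
    """Return (conflict?, time_to_cpa_s, distance_at_cpa_nm)."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
    dv = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    if np.dot(dv, dv) < 1e-12:                           # no relative motion
        t_cpa = 0.0
    else:
        t_cpa = -np.dot(dp, dv) / np.dot(dv, dv)
        t_cpa = max(0.0, min(t_cpa, look_ahead_s))
    d_cpa = np.linalg.norm(dp + dv * t_cpa)
    return d_cpa < zone_radius_nm, t_cpa, d_cpa

# Two aircraft converging head-on, 40 nm apart, each at ~480 kt (0.133 nm/s).
print(state_based_conflict(p1=[0, 0], v1=[0.133, 0], p2=[40, 0], v2=[-0.133, 0]))
```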

Article Outline

1. Introduction 1.1. Today's problems 1.2. How things evolved 1.3. Definition and expectation 2. Studies performed 2.1. Pilot workload measurements in studies 2.1.1. Pilot subjective workload, acceptability and safety 2.1.2. Pilot semi-objective workload 2.1.3. Pilot objective workload 2.2. Validation platform 2.3. NLR/NASA Free Flight 2.3.1. Airborne Separation Assurance System 2.3.2. Experiments 2.3.3. Results 2.4. INTENT 2.4.1. ASAS 2.4.2. Experiments 2.4.3. Results 2.5. Mediterranean Free Flight (MFF) 2.5.1. ASAS 2.5.2. Experiments 2.5.3. Results 3. Discussion: issues solved and issues remaining References

93

An efficient parallel optimization algorithm for the Token Bucket control mechanism
Original Research Article, Computer Communications, Volume 29, Issue 12, 4 August 2006, Pages 2281-2293
N.U. Ahmed, X. Lu, L.O. Barbosa


Abstract

The Token Bucket algorithm, one of the most widely used control mechanisms in computer communication networks, has been extensively used to meet the QoS needs of various applications. Recently, an optimization technique based on dynamic programming and a genetic algorithm was developed [N.U. Ahmed, Bo Li, Luis Orozco-Barbosa, Modelling and optimization of computer network traffic controllers, Mathematical Problems in Engineering 6 (2005) 617–640] which improves network utilization and throughput by reducing data losses, service time, etc. It requires, however, long execution times and excessive memory space, which limits its applicability to high-dimensional problems. In this study we reduce both the space and the time complexity by introducing multiple processors and the Reduced Memory Algorithm, thereby opening up the prospect of solving large-scale problems. Our parallel processing algorithm is tested with MPEG-4 traces. The numerical results show that the algorithm can effectively solve multiple Token Bucket problems; they also provide guidelines for configuring the parallel processing platform.
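For reference, a minimal Python sketch of the Token Bucket mechanism itself follows; it is not the paper's dynamic-programming or genetic optimization, and the rate and bucket depth shown are arbitrary.

```python
# Minimal token bucket sketch: tokens accumulate at `rate` up to `capacity`;
# a packet conforms only if enough tokens are available when it arrives.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # bucket depth (maximum burst size)
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, packet_size: float) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True             # conforming packet
        return False                # non-conforming: drop or mark

bucket = TokenBucket(rate=1_000_000, capacity=64_000)   # 1 Mb/s, 64 kb burst
print(bucket.allow(12_000))  # True while the burst allowance lasts
```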

Article Outline

1. Introduction 1.1. Summary of contributions of this paper 1.2. Real time implementation 2. Review 3. A dynamic model of the Token Bucket algorithm 4. Computational complexity 5. A multiple Token Bucket optimization algorithm 5.1. The reduced-memory Dynamic Programming algorithm 5.2. Parallel processing 5.2.1. Parallel Reduced Memory Algorithm 5.3. Algorithm flowcharts 5.4. System analysis 5.4.1. Optimal number of PEs 5.4.2. Communication cost 6. System platform and parameters 6.1. Basic system parameters 6.2. Weights assigned to different losses 7. Numerical results and analysis 7.1. Running time and speedup 7.2. Parallel vs. sequential algorithms 7.2.1. Running time analysis 7.2.2. Memory requirements analysis 7.3. Optimal number of processors 8. Conclusion References

94

Optimum design of square free-standing communication towers
Original Research Article, Journal of Constructional Steel Research, Volume 58, Issue 3, March 2002, Pages 413-425
Nabeel Abdulrazzaq Jasim, Alaa Chaseb Galeb


Abstract

Free-standing communication towers with a square cross-section, subjected to multiple combinations of wind and dead loads, are optimally designed for least weight. The member areas and joint coordinates are treated as design variables of the tower, to which a reasonable initial geometry is specified. Members are designed to satisfy stress, displacement and buckling limits, in addition to linking constraints between area variables in each panel of the tower. The fully stressed design is assumed to be optimal. Coordinate variables are linked to reduce the number of independent design variables. The design problem is, in effect, divided into two separate, but dependent, design spaces: one for member sizes and the other for joint coordinates. On changing coordinate variables using the Hooke and Jeeves pattern search algorithm, the member areas are treated as dependent design variables. It is found that decomposing the optimization problem into a multilevel problem simplifies the procedure for finding the optimum design of free-standing towers.
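The Hooke and Jeeves pattern search mentioned above can be sketched compactly. The version below is a generic Python implementation applied to a placeholder quadratic objective rather than to tower coordinate variables.

```python
# Generic Hooke and Jeeves pattern search: exploratory coordinate probes,
# then a pattern move in the improving direction, shrinking the step when no
# improvement is found. The test objective is a placeholder.
import numpy as np

def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=1000):
    x_base = np.asarray(x0, float)
    for _ in range(max_iter):
        # Exploratory moves: probe +/- step along each coordinate.
        x_new = x_base.copy()
        for i in range(len(x_new)):
            for trial in (x_new[i] + step, x_new[i] - step):
                cand = x_new.copy()
                cand[i] = trial
                if f(cand) < f(x_new):
                    x_new = cand
        if f(x_new) < f(x_base):
            # Pattern move: jump further in the improving direction.
            x_pattern = x_new + (x_new - x_base)
            x_base = x_pattern if f(x_pattern) < f(x_new) else x_new
        else:
            step *= shrink          # no improvement: refine the mesh
            if step < tol:
                break
    return x_base

print(hooke_jeeves(lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2, [0.0, 0.0]))
```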

Article Outline

1. Introduction 2. Formulation of the optimization problem 3. Optimization of tower geometry 4. Case studies 4.1. Case 1 4.2. Case 2 5. Discussion 5.1. Optimum design formulation of free-standing towers 5.2. Effect of the independent design variables 6. Conclusions Appendix A. The equal angle sections used to design the tower References

95

Design of a multi-dimensional query expression for document warehouses
Original Research Article, Information Sciences, Volume 174, Issues 1-2, 28 June 2005, Pages 55-79
Frank S.C. Tseng


Abstract

During the past decade, data warehousing has been widely adopted in the business community. It provides multi-dimensional analyses of accumulated historical business data to support contemporary administrative decision-making. However, most present data warehousing query languages only provide on-line analytical processing (OLAP) for numeric data. For example, MDX (Multi-Dimensional eXpressions) has been proposed as a query language for describing multi-dimensional queries over databases with OLAP capabilities. Nevertheless, it is believed that only about 20% of the information that can be extracted from data warehouses concerns numeric data; the other 80% is hidden in non-numeric data or even in documents. Therefore, many researchers now advocate that it is time to conduct research on document warehousing to capture complete business intelligence. Document warehouses, unlike traditional document management systems, include extensive semantic information about documents, cross-document feature relations, and document grouping or clustering to provide more accurate and more efficient access to text-oriented business intelligence. In this paper, we extend the structure of MDX into a new one containing complete constructs for querying document warehouses. Traditional MDX contains only SELECT, FROM, and WHERE clauses, which is not rich enough for document warehousing. We present how to extend the language constructs to include GROUP BY, HAVING, and ORDER BY, to design an SQL-like query language for document warehousing. This work is essential for establishing an infrastructure that combines text processing with numeric OLAP processing technologies. Hopefully, the combination of data warehousing and document warehousing will become one of the most important kernels of knowledge management and customer relationship management applications.
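To make the proposed clauses concrete, the following Python sketch imitates what a GROUP BY / HAVING / ORDER BY query over document metadata would compute; the records and field names are invented for illustration and do not follow the paper's syntax.

```python
# Plain-Python analogue of the extended clauses over toy document metadata:
# group documents by category, keep groups with at least two documents
# (HAVING), and sort the result by an aggregate (ORDER BY).
from collections import defaultdict

docs = [
    {"category": "finance", "year": 2004, "terms": 120},
    {"category": "finance", "year": 2005, "terms": 95},
    {"category": "medical", "year": 2005, "terms": 40},
]

# GROUP BY category
groups = defaultdict(list)
for d in docs:
    groups[d["category"]].append(d)

# HAVING count(*) >= 2, projecting aggregates per group
rows = [(cat, len(ds), sum(d["terms"] for d in ds))
        for cat, ds in groups.items() if len(ds) >= 2]

# ORDER BY total terms, descending
rows.sort(key=lambda r: r[2], reverse=True)
print(rows)   # [('finance', 2, 215)]
```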

Article Outline

1. Introduction 2. An introduction to document warehousing 3. Basic concepts in multi-dimensional expressions 3.1. Members 3.2. Tuples 3.3. Sets 3.4. Axes 3.5. Basic constructs in MDX queries 3.5.1. The SELECT clause 3.5.2. The FROM clause 3.5.3. The WHERE clause 4. Basic constructs of multi-dimensional document expressions 4.1. The GROUP BY clause 4.2. The HAVING clause 4.3. The ORDER BY clause 5. Putting it all together 6. Conclusion and future directions 6.1. Conclusion 6.2. Future works References

96

Formal ontology for natural language processing and the integration of biomedical databases
Original Research Article, International Journal of Medical Informatics, Volume 75, Issues 3-4, March-April 2006, Pages 224-231
Jonathan Simon, Mariana Dos Santos, James Fielding, Barry Smith


Summary

The central hypothesis underlying this communication is that the methodology and conceptual rigor of a philosophically inspired formal ontology can bring significant benefits in the development and maintenance of application ontologies [A. Flett, M. Dos Santos, W. Ceusters, Some Ontology Engineering Procedures and their Supporting Technologies, EKAW2002, 2003]. This hypothesis has been tested in the collaboration between Language and Computing (L&C), a company specializing in software for supporting natural language processing especially in the medical field, and the Institute for Formal Ontology and Medical Information Science (IFOMIS), an academic research institution concerned with the theoretical foundations of ontology. In the course of this collaboration L&C's ontology, LinKBase, which is designed to integrate and support reasoning across a plurality of external databases, has been subjected to a thorough auditing on the basis of the principles underlying IFOMIS's Basic Formal Ontology (BFO) [B. Smith, Basic Formal Ontology, 2002. http://ontology.buffalo.edu/bfo]. The goal is to transform a large terminology-based ontology into one with the ability to support reasoning applications. Our general procedure has been the implementation of a meta-ontological definition space in which the definitions of all the concepts and relations in LinKBase are standardized in the framework of first-order logic. In this paper we describe how this principles-based standardization has led to a greater degree of internal coherence of the LinKBase structure, and how it has facilitated the construction of mappings between external databases using LinKBase as a translation hub. We argue that the collaboration here described represents a new phase in the quest to solve the so-called "Tower of Babel" problem of ontology integration [F. Montayne, J. Flanagan, Formal Ontology: The Foundation for Natural Language Processing, 2003. http://www.landcglobal.com/].
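The "translation hub" role of LinKBase can be illustrated with a toy mapping composition in Python; the term identifiers below are invented, and the sketch says nothing about the first-order-logic definition layer described above.

```python
# Toy illustration of the hub idea: if SNOMED and GO terms are each mapped to
# hub concepts, a cross-database mapping can be obtained by composing the two
# maps. All identifiers here are made up for illustration only.
snomed_to_hub = {"SNOMED:0001": "Hub:Cell", "SNOMED:0002": "Hub:Nucleus"}
go_to_hub = {"GO:0005634": "Hub:Nucleus", "GO:0005623": "Hub:Cell"}

# Invert the GO map and compose: SNOMED term -> hub concept -> GO term.
hub_to_go = {v: k for k, v in go_to_hub.items()}
snomed_to_go = {s: hub_to_go[c]
                for s, c in snomed_to_hub.items() if c in hub_to_go}
print(snomed_to_go)   # {'SNOMED:0001': 'GO:0005623', 'SNOMED:0002': 'GO:0005634'}
```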

Article Outline

1. Introduction: L&C's LinKBase and Basic Formal Ontology 2. Methods: term definition standardization 2.1. The purposes of standardization 2.2. The specification of the standardization of LinKBase terms 3. Results: part 1 3.1. Objects and processes in LinKBase 3.2. Absences in LinKBase 4. Results: part 2 4.1. SNOMED and the "parthood" relation 4.2. Endurants and perdurants within GO 4.3. Mapping ontological elements: applying external consistency 4.4. Mapping SNOMED to LinKBase 4.5. Mapping GO to LinKBase 5. Discussion Acknowledgements References

97

Optimal on–off control of refrigerated transport systems
Original Research Article, Control Engineering Practice, Volume 18, Issue 12, December 2010, Pages 1406-1417
Bin Li, Richard Otten, Vikas Chandan, William F. Mohs, Jeff Berge, Andrew G. Alleyne


Abstract
This paper presents a hysteretic on–off control scheme with optimization algorithms for temperature regulation in refrigerated transport systems. A nonlinear dynamic refrigerated transport system model, consisting of a refrigeration unit and a cargo space, is developed and serves as an analytical tool for control design. An optimal on–off control strategy is developed based on time domain analysis of the temperature oscillations. A cost function involving temperature variations from the set-point, energy consumption and average compressor on–off cycling frequency is introduced for minimization. The choice of weighting values in the cost function gives the control approach the flexibility to meet different requirements. Simulation examples demonstrate the ability and robustness of the proposed method in achieving tighter temperature regulation and higher system efficiency in transport refrigeration systems. The simulations are augmented by experimental validation via a novel hardware-in-the-loop load emulation approach.
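A bare-bones hysteretic on-off loop, simulated against a first-order cargo-temperature model in Python, is sketched below. The rate constants and deadband are placeholder values, and the optimization of the cost function described in the abstract is not included.

```python
# Basic hysteretic on-off temperature control against a crude first-order
# cargo model; parameter values are arbitrary, not the paper's.
def simulate(setpoint=2.0, deadband=1.0, t_amb=32.0, hours=4.0, dt_s=10.0):
    """Return a list of (compressor_on, temperature) samples."""
    k_loss, k_cool = 2e-4, 2.5e-3      # heat-gain / cooling coefficients [1/s]
    temp, cooling, history = t_amb, False, []
    for _ in range(int(hours * 3600 / dt_s)):
        # Hysteresis: switch on above the upper bound, off below the lower bound.
        if temp > setpoint + deadband / 2:
            cooling = True
        elif temp < setpoint - deadband / 2:
            cooling = False
        dT = k_loss * (t_amb - temp) - (k_cool * (temp + 10.0) if cooling else 0.0)
        temp += dT * dt_s
        history.append((cooling, temp))
    return history

hist = simulate()
switches = sum(prev[0] != cur[0] for prev, cur in zip(hist, hist[1:]))
print("compressor switching events over 4 h:", switches)
```

A larger deadband reduces the switching count at the cost of wider temperature excursions, which is exactly the trade-off the paper's cost function weights against energy use.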

Article Outline

Nomenclature 1. Introduction 2. System model 2.1. VCC model 2.2. Refrigerated cargo space model 3. On–off control algorithm 3.1. Basic hysteretic on–off control 3.2. Optimal hysteretic on–off control 3.2.1. Time period function 3.2.2. Cost function 3.2.3. Optimization algorithm 4. Simulation examples 4.1. Constant ambient temperature (32 °C) scenario 4.1.1. Tighter temperature control case 4.1.2. Constrained temperature control case 4.2. Varying ambient temperature scenario 4.3. Robustness analysis 5. Hardware-in-the-loop load emulation experimentation 5.1. Load emulation framework 5.2. Experimental studies 6. Conclusions Acknowledgements Appendix A References

Research highlights

• Nonlinear refrigerated transport system model development.
• Formulation of optimal on–off hysteretic control framework.
• Flexibility in temperature regulation and energy efficiency control.
• Experimental validation via a novel load emulation approach.

98

Parallel techniques for information extraction from hyperspectral imagery using heterogeneous networks of workstations
Original Research Article, Journal of Parallel and Distributed Computing, Volume 68, Issue 1, January 2008, Pages 93-111
Antonio J. Plaza


Abstract

Recent advances in space and computer technologies are revolutionizing the way remotely sensed data is collected, managed and interpreted. In particular, NASA is continuously gathering very high-dimensional imagery data from the surface of the Earth with hyperspectral sensors such as the Jet Propulsion Laboratory's airborne visible-infrared imaging spectrometer (AVIRIS) or the Hyperion imager aboard the Earth Observing-1 (EO-1) satellite platform. The development of efficient techniques for extracting scientific understanding from the massive amount of collected data is critical for space-based Earth science and planetary exploration. In particular, many hyperspectral imaging applications demand real-time or near real-time performance. Examples include homeland security/defense, environmental modeling and assessment, wild-land fire tracking, biological threat detection, and monitoring of oil spills and other types of chemical contamination. Only a few parallel processing strategies for hyperspectral imagery are currently available, and most of them assume homogeneity in the underlying computing platform. In turn, heterogeneous networks of workstations (NOWs) have rapidly become a very promising computing solution which is expected to play a major role in the design of high-performance systems for many on-going and planned remote sensing missions. In order to address the need for cost-effective parallel solutions in this fast growing and emerging research area, this paper develops several highly innovative parallel algorithms for unsupervised information extraction and mining from hyperspectral image data sets, which have been specifically designed to run on heterogeneous NOWs. The considered approaches fall into three highly representative categories: clustering, classification and spectral mixture analysis. Analytical and experimental results are presented in the context of realistic applications (based on hyperspectral data sets from the AVIRIS data repository) using several homogeneous and heterogeneous parallel computing facilities available at NASA's Goddard Space Flight Center and the University of Maryland.
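One building block common to heterogeneous parallel designs is speed-proportional data partitioning. The sketch below splits image rows across workstations according to assumed relative speeds; it is an illustration of the general idea, not the paper's algorithms.

```python
# Split n_rows of a hyperspectral image among workstations in proportion to
# their measured relative speeds, so faster nodes receive more rows.
# The speeds and row count below are made-up values.
def partition_rows(n_rows, speeds):
    total = sum(speeds)
    shares = [int(n_rows * s / total) for s in speeds]
    shares[-1] += n_rows - sum(shares)          # give any remainder to the last node
    bounds, start = [], 0
    for share in shares:
        bounds.append((start, start + share))   # [start, end) row range per node
        start += share
    return bounds

# 1024 image rows over four workstations with relative speeds 1 : 1 : 2 : 4.
print(partition_rows(1024, [1.0, 1.0, 2.0, 4.0]))
# [(0, 128), (128, 256), (256, 512), (512, 1024)]
```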

99

A context-sensitive middleware for dynamic integration of mobile devices with network infrastructures
Original Research Article, Journal of Parallel and Distributed Computing, Volume 64, Issue 2, February 2004, Pages 301-317
Stephen S. Yau, Fariaz Karim


Abstract

Network infrastructures (NIs), such as the Internet, grids, smart spaces, and enterprise computing environments, usually consist of stationary computing nodes that provide the backbone for environment sensing and for high-performance computing and communication. An NI may, in addition, host various types of application software for performing resource-intensive computation. On the other hand, recent advances in embedded systems and wireless communication technologies have increased the flexibility of using mobile devices for various practical applications. Mobile devices mostly execute application software that improves the personal productivity of the user. However, despite rapid technology advances, mobile devices are expected to remain resource-poor in comparison with the computing resources in NIs, while the computing resources in an NI cannot readily offer the same flexibility to individual users because of their fixed location and size. It is therefore desirable to combine the respective strengths of mobile devices and network infrastructures whenever possible. Dynamic integration is the process by which a mobile device can detect, communicate with, and use the required resources in nearby NIs in an application-transparent way. The benefit of dynamic integration is that the applications in both the mobile device and the NI can interoperate with each other as if the mobile device were an integral part of the NI, or vice versa. In this paper, a context-sensitive middleware, called Reconfigurable Context-Sensitive Middleware (RCSM), is presented for addressing this dynamic integration problem. A novel feature of RCSM is that its dynamic integration mechanism is context-sensitive, so the integration between the application software in a mobile device and an NI can be restricted to specific contexts, such as a particular location or a particular time. RCSM furthermore provides transparency over the dynamic resource discovery and networking aspects, so that application-level cohesion can be easily achieved. The integration process does not impose any development-time restrictions on the application software in an NI. Our experimental results, based on an implementation of RCSM in an integrated ad hoc and infrastructure-based IEEE 802.11 test bed environment, indicate that the integration process is lightweight and delivers reasonably high performance on PDA-like devices and desktop PCs.
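The context-sensitive gating idea can be shown with a toy rule check in Python; the rule structure and field names below are hypothetical and unrelated to RCSM's actual interfaces.

```python
# Toy illustration of context-restricted integration: a device binds to an
# NI-side service only when the current (location, hour) context satisfies
# the context attached to that integration. All names here are invented.
from dataclasses import dataclass

@dataclass
class IntegrationRule:
    service: str
    location: str          # context: where integration is permitted
    hours: range           # context: when integration is permitted

def allowed(rule: IntegrationRule, location: str, hour: int) -> bool:
    return location == rule.location and hour in rule.hours

rule = IntegrationRule("print-service", location="conference-room", hours=range(8, 18))
print(allowed(rule, "conference-room", 10))   # True: integrate and use the service
print(allowed(rule, "lobby", 10))             # False: remain stand-alone
```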

100

Novel hybrid approach to data-packet-flow prediction for improving network traffic analysis
Original Research Article, Applied Soft Computing, Volume 9, Issue 3, June 2009, Pages 1177-1183
Bao Rong Chang, Hsiu Fen Tsai


Abstract

Forecasting the flow of data packets between client and server for network traffic analysis is viewed as a part of web analytics. Thousands of web-smart businesses depend on web analytics to improve website conversions, reduce marketing costs, facilitate website optimization, speed up website monitoring and provide a higher level of service to their customers and partners. This paper intends to develop a highly accurate prediction as one of the core components of network traffic analysis. In this study, a novel hybrid approach, combining an adaptive neuro-fuzzy inference system (ANFIS) with nonlinear generalized autoregressive conditional heteroscedasticity (NGARCH), is tuned optimally by quantum minimization (QM) and then applied to forecasting the flow of data packets around a website. The composite model (QM-ANFIS/NGARCH) is set up from the forecasting point of view to improve predictive accuracy, because it can resolve the problems of overshoot and volatility clustering simultaneously within a time series. As part of real-time intelligent web analytics, the highly accurate prediction will help webmasters improve data-packet-flow throughput by up to around 20%, assisting them in optimizing their websites, maximizing online marketing conversions, and tracking lead campaigns.
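For readers unfamiliar with NGARCH, the conditional-variance recursion is easy to state on its own. The Python sketch below uses placeholder coefficients and ignores the ANFIS regression and quantum-minimization tuning parts of the hybrid model.

```python
# NGARCH(1,1) conditional-variance recursion on its own, with placeholder
# (not fitted) coefficients; the asymmetry parameter theta lets negative and
# positive shocks affect volatility differently.
def ngarch_variance(returns, omega=0.05, alpha=0.1, beta=0.85, theta=0.5):
    """h_t = omega + alpha*(r_{t-1} - theta*sqrt(h_{t-1}))**2 + beta*h_{t-1}."""
    h = [1.0]                                   # arbitrary initial conditional variance
    for r_prev in returns[:-1]:
        h_prev = h[-1]
        h.append(omega + alpha * (r_prev - theta * h_prev ** 0.5) ** 2 + beta * h_prev)
    return h

traffic_changes = [0.2, -0.5, 0.1, 0.8, -0.3]   # e.g. scaled packet-flow changes
print(ngarch_variance(traffic_changes))
```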

Article Outline

1. Introduction 2. Prediction composite model ANFIS/NGARCH 2.1. Volatility clustering problem 2.2. NGARCH resolving volatility clustering 2.3. ANFIS coordinated with NGARCH to improve regression 3. Quantum minimization—an optimum search algorithm 3.1. Quantum exponential searching algorithm 3.2. Quantum minimum searching algorithm 4. QM tuning composite model ANFIS/NGARCH 5. Experimental results and discussions 5.1. Criteria for measuring accuracy 5.2. Two experiments and their verifications 5.3. Discussions on website tracking 6. Conclusions Acknowledgements References


results 76 - 100


305 articles found for: pub-date > 2001 and tak((LINUX or C or Advanced or development or project or Business or communications or services or business or gleam or space or Underlying) and (distributed or architecture or stable or reliable or support or platform or c++ or system or design or control or SQL or database) and (RTP or RTCP or streaming or media or realize or high or concurrent or SAAS or structure) and (load or SAAS or CGI or perl or PHP or Java or "front-end" or "back-end" or processing or server or cohesion) and (across or mysql or xml or ajax or asp.net or c# or network or provides or IPPBX or PSTN or applied or optimization))

Senior Website Operations Manager
北京网赢天地科技文化传播有限公司

Industry: Internet/E-commerce; Education/Training
Company type: Foreign-funded (Europe/America)
Company size: 50-150 employees
Posting date: 2011-04-09
Work experience: 5+ years
Location: Beijing, Haidian District
Education: Master's degree
Openings: 1

Requirements:
1. Degree in a computer-related field, Master's or above.
2. At least 5 years of R&D or technical management experience, including at least 2 years of project management experience independently leading a development team.
3. Broad technical skills, with development proficiency in at least two languages such as Java, PHP or C/C++.
4. Capable of independent business and system analysis; has taken full responsibility for the complete requirements analysis, implementation and management of at least one project; able to grasp the key business points and propose corresponding technical solutions.
5. Familiar with web development and system architecture on Linux/Unix, and able to independently plan the design of the entire application architecture, including the system architecture.
6. Rich experience with database-backed applications; has used at least two database systems for application development.
7. Good communication and writing skills, and the ability to plan and execute technical projects.
8. Familiar with software engineering project management; has served as technical manager or project manager for at least two formal software projects; good at team management and able to guide programmers towards efficient development.
9. Clear and well-organized thinking.
10. Aged between 30 and 40.

Responsibilities:
1. Project planning and control, cost budgeting and control; task assignment and execution monitoring; preparation and execution of communication plans.
2. Participate in requirements analysis and write the complete overall software requirements specification, the software architecture analysis specification, and the detailed design specifications for core modules.
3. Write and test the code of core modules and integrate highly complex code during technical development.
4. Provide technical guidance and support to other team members, and willingly share knowledge and experience with the team.

