Meta Patent: Tactile messages in an extended reality environment

Patent: Tactile messages in an extended reality environment

Publication Number: 20230393659

Publication Date: 2023-12-07

Assignee: Meta Platforms Technologies

Abstract

Techniques for sending and receiving tactile messages (e.g., haptic emojis) in an extended reality environment to facilitate touch communication between users. In one particular aspect, an extended reality system is provided having a head-mounted device with a display to display content to a first user, sensors to capture input data, processors, and memories accessible to the processors, the memories storing instructions executable by the processors to perform processing including: capturing, using the one or more sensors, the input data from the first user, extracting features from the input data that correspond to an electronic communication, identifying an emoji from a lexicon of emojis based on the extracted features, obtaining digital assets for the emoji, where the digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output, and transmitting the digital assets to a device of a second user.

Claims

What is claimed is:

1. An extended reality system comprising: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user; one or more processors; and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user; extracting features from the input data that correspond to an electronic communication; identifying an emoji from a lexicon of emojis based on the extracted features; obtaining digital assets for the emoji, wherein the digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output; and transmitting the digital assets to a device of a second user.

2. The extended reality system of claim 1, wherein the extracting the features comprises: determining characteristics of the input data, and identifying patterns within the input data that correspond to a key or attributes of electronic communication based on the characteristics, the key or attributes being the extracted features; and wherein the identifying the emoji comprises: constructing a query using the extracted features as parameters of the query, and executing the query on the lexicon of emojis.

3. The extended reality system of claim 1, wherein the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output.

4. The extended reality system of claim 1, wherein the digital assets further comprise an image or video asset, an audio asset, or both.

5. The extended reality system of claim 1, wherein the processing further comprises obtaining additional information based on the emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and transmitting the additional information to the device of the second user.

6. An extended reality system comprising: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user; one or more processors; and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user; predicting a haptic emoji or a haptic signal based on the input data and model parameters learned from historical input data and context data; and transmitting the haptic signal or digital assets for the haptic emoji to a device of a second user.

7. The extended reality system of claim 6, wherein the processing further comprises obtaining the digital assets for the haptic emoji, and the digital assets comprise the haptic signal configured with parameter information to generate patterns for haptic output.

8. The extended reality system of claim 6, wherein the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output.

9. The extended reality system of claim 6, wherein the digital assets further comprise an image or video asset, an audio asset, or both.

10. The extended reality system of claim 7, wherein the processing further comprises obtaining additional information based on the emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and transmitting the additional information to the device of the second user.

12. The extended reality system of claim 11, wherein the parameters of the one or more actuator signals include information on pressure, temperature, texture, shear stress, time, space, or a combination thereof.

13. The extended reality system of claim 11, wherein the processing further comprises, prior to generating the one or more actuator signals, adjusting the parameter information on the interval, pitch, amplitude, or a combination thereof for the haptic signal in accordance with preferences of the first user.

14. The extended reality system of claim 12, wherein the processing further comprises, prior to generating the one or more actuator signals, adjusting the parameter information on the pressure, temperature, texture, shear stress, time, space, or a combination thereof for the one or more actuator signals in accordance with preferences of the first user.

15. The extended reality system of claim 11, wherein the processing further comprises obtaining additional information based on an emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and the haptic output is generated with virtual content, which is generated and rendered by the head-mounted device in an extended reality environment displayed to the first user based on the additional information.

16. The extended reality system of claim 11, wherein the haptic signal is predicted based on input data and model parameters learned from historical input data and context data, and the input data is captured from a head-mounted device of the second user.

17. The extended reality system of claim 11, wherein the haptic signal is part of digital assets obtained for an emoji identified from a lexicon of emojis.

18. The extended reality system of claim 17, wherein the emoji is identified from a lexicon of emojis based on extracted features from input data that correspond to an electronic communication, and the input data is captured from a head-mounted device of a second user.

19. The extended reality system of claim 17, wherein the digital assets further comprise an image or video asset, an audio asset, or both.

20. The extended reality system of claim 17, wherein the haptic signal for the emoji is transmitted to the head-mounted device of the first user.

Description

CROSS-REFERENCE TO RELATED APPLICATION

The present application is a non-provisional application of and claims the benefit and priority under 35 U.S.C. 119(e) of U.S. Provisional Application No. 63/365,689, filed Jun. 1, 2022, the entire contents of which are incorporated herein by reference for all purposes.

FIELD

The present disclosure relates generally to haptic communication in an extended reality environment, and more particularly, to techniques for sending and receiving tactile messages (e.g., haptic emojis) in an extended reality environment to facilitate touch communication between users.

BACKGROUND

BRIEF SUMMARY

Techniques disclosed herein relate generally to haptic communication in an extended reality environment. More specifically and without limitation, techniques disclosed herein relate to sending and receiving tactile messages (e.g., haptic emojis) in an extended reality environment to facilitate touch communication between users. Haptic emojis or reactions are tactile messages that can be sent and received throughout the day with a wearable device (e.g., a haptic glove or wristband). Each haptic emoji or reaction may be accompanied by audio and/or visual components to help train a user on the haptic signals. The tactile messages can be sent through traditional user interfaces, haptic-first interfaces, or more expressive gestures such as a hand wave, where in this example the recipient may feel a haptic pattern to mimic a wave motion.

In various embodiments, an extended reality system is provided that includes: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user; one or more processors; and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user, extracting features from the input data that correspond to an electronic communication, identifying an emoji from a lexicon of emojis based on the extracted features, obtaining digital assets for the emoji, wherein the digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output, and transmitting the digital assets to a device of a second user.

In some embodiments, the extracting the features comprises: determining characteristics of the input data, and identifying patterns within the input data that correspond to a key or attributes of electronic communication based on the characteristics, the key or attributes being the extracted features, and the identifying the emoji comprises: constructing a query using the extracted features as parameters of the query, and executing the query on the lexicon of emojis.

In some embodiments, the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output.

In some embodiments, the digital assets further comprise an image or video asset, an audio asset, or both.

In some embodiments, the processing further comprises obtaining additional information based on the emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and transmitting the additional information to the device of the second user.

In various embodiments, an extended reality system is provided that includes: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user, one or more processors, and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user, predicting a haptic emoji or a haptic signal based on the input data and model parameters learned from historical input data and context data, and transmitting the haptic signal or digital assets for the haptic emoji to a device of a second user.

In some embodiments, the haptic emoji is predicted and the processing further comprises obtaining the digital assets for the haptic emoji, and the digital assets comprise the haptic signal configured with parameter information to generate patterns for haptic output.

In some embodiments, the parameters of the one or more actuator signals include information on pressure, temperature, texture, shear stress, time, space, or a combination thereof.

In some embodiments, the processing further comprises, prior to generating the one or more actuator signals, adjusting the parameter information on the interval, pitch, amplitude, or a combination thereof for the haptic signal in accordance with preferences of the first user.

In some embodiments, the processing further comprises, prior to generating the one or more actuator signals, adjusting the parameter information on the pressure, temperature, texture, shear stress, time, space, or a combination thereof for the one or more actuator signals in accordance with preferences of the first user.

In some embodiments, the processing further comprises obtaining additional information based on an emoji or the haptic signal, the additional information includes a text description of the haptic output conveyed by the haptic signal, an audio component corresponding to the haptic signal, an image component corresponding to the haptic signal, or a combination thereof, and the haptic output is generated with virtual content, which is generated and rendered by the head-mounted device in an extended reality environment displayed to the first user based on the additional information.

Some embodiments of the present disclosure include a computer-implemented method comprising part or all of one or more methods and/or part or all of one or more processes disclosed herein.

Some embodiments of the present disclosure include a system including one or more data processors. In some embodiments, the system includes a non-transitory computer readable storage medium containing instructions which, when executed on the one or more data processors, cause the one or more data processors to perform part or all of one or more methods and/or part or all of one or more processes disclosed herein.

Some embodiments of the present disclosure include a computer-program product tangibly embodied in a non-transitory machine-readable storage medium, including instructions configured to cause one or more data processors to perform part or all of one or more methods and/or part or all of one or more processes disclosed herein.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a simplified block diagram of a network environment in accordance with various embodiments.

FIG. 2A is an illustration depicting an example extended reality system that presents and controls user interface elements within an extended reality environment in accordance with various embodiments.

FIG. 2B is an illustration depicting user interface elements in accordance with various embodiments.

FIG. 3A is an illustration of an augmented reality system in accordance with various embodiments.

FIG. 3B is an illustration of a virtual reality system in accordance with various embodiments.

FIG. 4A is an illustration of haptic devices in accordance with various embodiments.

FIG. 4B is an illustration of an exemplary virtual reality environment in accordance with various embodiments.

FIG. 4C is an illustration of an exemplary augmented reality environment in accordance with various embodiments.

FIG. 5 is a simplified block diagram of a social communication platform in accordance with various embodiments.

FIG. 6A is a simplified block diagram illustrating a social communication system for converting input data to haptic output using a lexicon of emojis in accordance with various embodiments.

FIG. 6B is an illustration of digital assets for a lexicon of emojis in accordance with various embodiments.

FIG. 6C is an illustration of digital assets for a lexicon of emojis in accordance with various embodiments.

FIG. 7 is a flowchart illustrating a process for converting input data to haptic output using a lexicon of emojis in accordance with various embodiments.

FIG. 8 is a simplified block diagram illustrating a machine-learning prediction system in accordance with various embodiments.

FIG. 9 is a flowchart illustrating a process to predict haptic emojis for conveying a touch message in accordance with various embodiments.

FIG. 10 is a simplified block diagram illustrating a social communication system for supplementing a haptic signal with additional information to facilitate a user learning a haptic output in accordance with various embodiments.

FIG. 11 is a flowchart illustrating a process for supplementing a haptic signal with additional information to facilitate a user learning a haptic output in accordance with various embodiments.

FIG. 12 is a simplified block diagram illustrating a signal generator for operating cutaneous actuators to deliver haptic output (tactile feedback) to a user in accordance with various embodiments.

FIG. 13 is a flowchart illustrating a process for generating a haptic output in accordance with various embodiments.

DETAILED DESCRIPTION

In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of certain embodiments. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive. The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs.

INTRODUCTION

In another exemplary embodiment, an extended reality system is provided comprising: a head-mounted device comprising a display to display content to a first user, one or more sensors to capture input data including images of a visual field of the first user; one or more processors; and one or more memories accessible to the one or more processors, the one or more memories storing a plurality of instructions executable by the one or more processors, the plurality of instructions comprising instructions that when executed by the one or more processors cause the one or more processors to perform processing comprising: capturing, using the one or more sensors, the input data from the first user; predicting a haptic emoji or a haptic signal based on the input data and model parameters learned from historical input data and context data; and transmitting the haptic signal or digital assets for the haptic emoji to a device of a second user.

Advantageously, the tactile messages are more expressive than visual- or audio-based messages, and are particularly useful when a user cannot view or listen to visual- or audio-based messages.

Extended Reality System Overview

This disclosure contemplates any suitable network 120. As an example and not by way of limitation, one or more portions of a network 120 may include an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, or a combination of two or more of these. A network 120 may include one or more networks 120.

Links 125 may connect a client system 105, a virtual assistant engine 110, and a remote system 115 to a communication network 120 or to each other. This disclosure contemplates any suitable links 125. In particular embodiments, one or more links 125 include one or more wireline (such as, for example, Digital Subscriber Line (DSL) or Data Over Cable Service Interface Specification (DOCSIS)), wireless (such as, for example, Wi-Fi or Worldwide Interoperability for Microwave Access (WiMAX)), or optical (such as, for example, Synchronous Optical Network (SONET) or Synchronous Digital Hierarchy (SDH)) links. In particular embodiments, one or more links 125 each include an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, a portion of the Internet, a portion of the PSTN, a cellular technology-based network, a satellite communications technology-based network, another link 125, or a combination of two or more such links 125. Links 125 need not necessarily be the same throughout a network environment 100. One or more first links 125 may differ in one or more respects from one or more second links 125.

In various embodiments, a client system 105 is an electronic device including hardware, software, or embedded logic components, or a combination of two or more such components, and capable of carrying out the appropriate extended reality functionalities in accordance with techniques of the disclosure. As an example, and not by way of limitation, a client system 105 may include a desktop computer, notebook or laptop computer, netbook, a tablet computer, e-book reader, GPS device, camera, personal digital assistant (PDA), handheld electronic device, cellular telephone, smartphone, a VR, MR, or AR headset such as an AR/VR HMD, other suitable electronic device capable of displaying extended reality content, or any suitable combination thereof. In particular embodiments, the client system 105 is an AR/VR HMD as described in detail with respect to FIG. 2. This disclosure contemplates any suitable client system 105 configured to generate and output extended reality content to the user. The client system 105 may enable its user to communicate with other users at other client systems 105.

A user at the client system 105 may use the virtual assistant application 130 to interact with the virtual assistant engine 110. In some instances, the virtual assistant application 130 is a stand-alone application or integrated into another application such as a social-networking application or another suitable application (e.g., an artificial simulation application). In some instances, the virtual assistant application 130 is integrated into the client system 105 (e.g., part of the operating system of the client system 105), an assistant hardware device, or any other suitable hardware devices. In some instances, the virtual assistant application 130 may be accessed via a web browser 135. In some instances, the virtual assistant application 130 passively listens to and watches interactions of the user in the real-world, and processes what it hears and sees (e.g., explicit input such as audio commands or interface commands, contextual awareness derived from audio or physical actions of the user, objects in the real-world, environmental triggers such as weather or time, and the like) in order to interact with the user in an intuitive manner.

In various embodiments, a remote system 115 may include one or more types of servers, one or more data stores, one or more interfaces, including but not limited to APIs, one or more web services, one or more content sources, one or more networks, or any other suitable components, e.g., that servers may communicate with. A remote system 115 may be operated by the same entity or a different entity from an entity operating the virtual assistant engine 110. In particular embodiments, however, the virtual assistant engine 110 and third-party systems 115 may operate in conjunction with each other to provide virtual content to users of the client system 105. For example, a social-networking system 145 may provide a platform, or backbone, which other systems, such as third-party systems, may use to provide social-networking services and functionality to users across the Internet, and the virtual assistant engine 110 may access these systems to provide virtual content on the client system 105.

The remote system 115 may include a content object provider 150. A content object provider 150 includes one or more sources of virtual content objects, which may be communicated to the client system 105. As an example, and not by way of limitation, virtual content objects may include information regarding things or activities of interest to the user, such as, for example, movie showtimes, movie reviews, restaurant reviews, restaurant menus, product information and reviews, instructions on how to perform various tasks, exercise regimens, cooking recipes, or other suitable information. As another example and not by way of limitation, content objects may include incentive content objects, such as coupons, discount tickets, gift certificates, or other suitable incentive objects. As another example and not by way of limitation, content objects may include virtual objects such as virtual interfaces, 2D or 3D graphics, media content, or other suitable virtual objects.

In the example shown in FIG. 2A, virtual information or objects 240, 245 are mapped at a position relative to a physical object 235. As should be understood, the virtual imagery (e.g., virtual content such as information or objects 240, 245 and virtual user interface 250) does not exist in the real-world, physical environment. Virtual user interface 250 may be fixed, as relative to the user 220, the user's hand 230, physical objects 235, or other virtual content such as virtual information or objects 240, 245, for instance. As a result, client system 200 renders, at a user interface position that is locked relative to a position of the user 220, the user's hand 230, physical objects 235, or other virtual content in the extended reality environment, virtual user interface 250 for display at extended reality system 205 as part of extended reality content 225. As used herein, a virtual element ‘locked’ to a position of virtual content or a physical object is rendered at a position relative to the position of the virtual content or physical object so as to appear to be part of or otherwise tied in the extended reality environment to the virtual content or physical object.

Client system 200 may trigger generation and rendering of virtual content based on a current field of view of user 220, as may be determined by real-time gaze 255 tracking of the user, or other conditions. More specifically, image capture devices of the sensors 215 capture image data representative of objects in the real-world, physical environment that are within a field of view of the image capture devices. During operation, the client system 200 performs object recognition within image data captured by the image capture devices of extended reality system 205 to identify objects in the physical environment such as the user 220, the user's hand 230, and/or physical objects 235. Further, the client system 200 tracks the position, orientation, and configuration of the objects in the physical environment over a sliding window of time. Field of view typically corresponds with the viewing perspective of the extended reality system 205. In some examples, the extended reality application presents extended reality content 225 comprising mixed reality and/or augmented reality.

Various embodiments disclosed herein may include or be implemented in conjunction with various types of extended reality systems. Extended reality content generated by the extended reality systems may include completely computer-generated content or computer-generated content combined with captured (e.g., real-world) content. The extended reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional (3D) effect to the viewer). Additionally, in some embodiments, extended reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to, for example, create content in an extended reality and/or are otherwise used in (e.g., to perform activities in) an extended reality.

The extended reality systems may be implemented in a variety of different form factors and configurations. Some extended reality systems may be designed to work without near-eye displays (NEDs). Other extended reality systems may include an NED that also provides visibility into the real world (such as, e.g., augmented reality system 300 in FIG. 3A) or that visually immerses a user in an extended reality (such as, e.g., virtual reality system 350 in FIG. 3B). While some extended reality devices may be self-contained systems, other extended reality devices may communicate and/or coordinate with external devices to provide an extended reality experience to a user. Examples of such external devices include handheld controllers, mobile devices, desktop computers, devices worn by a user, devices worn by one or more other users, and/or any other suitable external system.

As shown in FIG. 3A, augmented reality system 300 may include an eyewear device 305 with a frame 310 configured to hold a left display device 315(A) and a right display device 315(B) in front of a user's eyes. Display devices 315(A) and 315(B) may act together or independently to present an image or series of images to a user. While augmented reality system 300 includes two displays, embodiments of this disclosure may be implemented in augmented reality systems with a single NED or more than two NEDs.

In some embodiments, augmented reality system 300 may include one or more sensors, such as sensor 320. Sensor 320 may generate measurement signals in response to motion of augmented reality system 300 and may be located on substantially any portion of frame 310. Sensor 320 may represent one or more of a variety of different sensing mechanisms, such as a position sensor, an inertial measurement unit (IMU), a depth camera assembly, a structured light emitter and/or detector, or any combination thereof. In some embodiments, augmented reality system 300 may or may not include sensor 320 or may include more than one sensor. In embodiments in which sensor 320 includes an IMU, the IMU may generate calibration data based on measurement signals from sensor 320. Examples of sensor 320 may include, without limitation, accelerometers, gyroscopes, magnetometers, other suitable types of sensors that detect motion, sensors used for error correction of the IMU, or some combination thereof.

In some examples, augmented reality system 300 may also include a microphone array with a plurality of acoustic transducers 325(A)-325(J), referred to collectively as acoustic transducers 325. Acoustic transducers 325 may represent transducers that detect air pressure variations induced by sound waves. Each acoustic transducer 325 may be configured to detect sound and convert the detected sound into an electronic format (e.g., an analog or digital format). The microphone array in FIG. 3A may include, for example, ten acoustic transducers: 325(A) and 325(B), which may be designed to be placed inside a corresponding ear of the user; acoustic transducers 325(C), 325(D), 325(E), 325(F), 325(G), and 325(H), which may be positioned at various locations on frame 310; and/or acoustic transducers 325(I) and 325(J), which may be positioned on a corresponding neckband 330.

In some embodiments, one or more of acoustic transducers 325(A)-(J) may be used as output transducers (e.g., speakers). For example, acoustic transducers 325(A) and/or 325(B) may be earbuds or any other suitable type of headphone or speaker. The configuration of acoustic transducers 325 of the microphone array may vary. While augmented reality system 300 is shown in FIG. 3A as having ten acoustic transducers 325, the number of acoustic transducers 325 may be greater or less than ten. In some embodiments, using higher numbers of acoustic transducers 325 may increase the amount of audio information collected and/or the sensitivity and accuracy of the audio information. In contrast, using a lower number of acoustic transducers 325 may decrease the computing power required by an associated controller 335 to process the collected audio information. In addition, the position of each acoustic transducer 325 of the microphone array may vary. For example, the position of an acoustic transducer 325 may include a defined position on the user, a defined coordinate on frame 310, an orientation associated with each acoustic transducer 325, or some combination thereof.

Acoustic transducers 325(A) and 325(B) may be positioned on different parts of the user's ear, such as behind the pinna, behind the tragus, and/or within the auricle or fossa. Or, there may be additional acoustic transducers 325 on or surrounding the ear in addition to acoustic transducers 325 inside the ear canal. Having an acoustic transducer 325 positioned next to an ear canal of a user may enable the microphone array to collect information on how sounds arrive at the ear canal. By positioning at least two of acoustic transducers 325 on either side of a user's head (e.g., as binaural microphones), augmented reality system 300 may simulate binaural hearing and capture a 3D stereo sound field around a user's head. In some embodiments, acoustic transducers 325(A) and 325(B) may be connected to augmented reality system 300 via a wired connection 340, and in other embodiments acoustic transducers 325(A) and 325(B) may be connected to augmented reality system 300 via a wireless connection (e.g., a Bluetooth connection). In still other embodiments, acoustic transducers 325(A) and 325(B) may not be used at all in conjunction with augmented reality system 300.

Acoustic transducers 325 on frame 310 may be positioned in a variety of different ways, including along the length of the temples, across the bridge, above or below display devices 315(A) and 315(B), or some combination thereof. Acoustic transducers 325 may also be oriented such that the microphone array is able to detect sounds in a wide range of directions surrounding the user wearing the augmented reality system 300. In some embodiments, an optimization process may be performed during manufacturing of augmented reality system 300 to determine relative positioning of each acoustic transducer 325 in the microphone array.

In some examples, augmented reality system 300 may include or be connected to an external device (e.g., a paired device), such as neckband 330. Neckband 330 generally represents any type or form of paired device. Thus, the following discussion of neckband 330 may also apply to various other paired devices, such as charging cases, smart watches, smart phones, wristbands, other wearable devices, hand-held controllers, tablet computers, laptop computers, other external compute devices, etc.

As shown, neckband 330 may be coupled to eyewear device 305 via one or more connectors. The connectors may be wired or wireless and may include electrical and/or non-electrical (e.g., structural) components. In some cases, eyewear device 305 and neckband 330 may operate independently without any wired or wireless connection between them. While FIG. 3A illustrates the components of eyewear device 305 and neckband 330 in example locations on eyewear device 305 and neckband 330, the components may be located elsewhere and/or distributed differently on eyewear device 305 and/or neckband 330. In some embodiments, the components of eyewear device 305 and neckband 330 may be located on one or more additional peripheral devices paired with eyewear device 305, neckband 330, or some combination thereof.

Neckband 330 may be communicatively coupled with eyewear device 305 and/or to other devices. These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented reality system 300. In the embodiment of FIG. 3A, neckband 330 may include two acoustic transducers (e.g., 325(I) and 325(J)) that are part of the microphone array (or potentially form their own microphone subarray). Neckband 330 may also include a controller 342 and a power source 345.

Acoustic transducers 325(I) and 325(J) of neckband 330 may be configured to detect sound and convert the detected sound into an electronic format (analog or digital). In the embodiment of FIG. 3A, acoustic transducers 325(I) and 325(J) may be positioned on neckband 330, thereby increasing the distance between the neckband acoustic transducers 325(I) and 325(J) and other acoustic transducers 325 positioned on eyewear device 305. In some cases, increasing the distance between acoustic transducers 325 of the microphone array may improve the accuracy of beamforming performed via the microphone array. For example, if a sound is detected by acoustic transducers 325(C) and 325(D) and the distance between acoustic transducers 325(C) and 325(D) is greater than, e.g., the distance between acoustic transducers 325(D) and 325(E), the determined source location of the detected sound may be more accurate than if the sound had been detected by acoustic transducers 325(D) and 325(E).
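The effect of transducer spacing on localization accuracy can be illustrated numerically. The sketch below is not from the patent: it assumes a far-field source, a pair of microphones, and a fixed timing error, and shows that the same error perturbs the estimated bearing far less when the transducer spacing is larger (as with frame-to-neckband placement).

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def bearing_from_tdoa(delta_t: float, spacing: float) -> float:
    """Estimate source bearing (degrees from broadside) from the time
    difference of arrival between two transducers: sin(theta) = c*dt/d."""
    ratio = max(-1.0, min(1.0, SPEED_OF_SOUND * delta_t / spacing))
    return math.degrees(math.asin(ratio))

# A 10-microsecond timing error shifts the bearing estimate much less
# when the transducers are farther apart.
for spacing in (0.02, 0.15):  # 2 cm on-frame vs. 15 cm frame-to-neckband
    true_dt = spacing * math.sin(math.radians(30.0)) / SPEED_OF_SOUND
    estimate = bearing_from_tdoa(true_dt + 10e-6, spacing)
    print(f"spacing {spacing:.2f} m: estimated {estimate:.1f} deg (true 30.0 deg)")
```

Running this prints roughly 42 degrees for the 2 cm spacing but about 31.5 degrees for the 15 cm spacing, consistent with the accuracy benefit described above.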

Power source 345 in neckband 330 may provide power to eyewear device 305 and/or to neckband 330. Power source 345 may include, without limitation, lithium-ion batteries, lithium-polymer batteries, primary lithium batteries, alkaline batteries, or any other form of power storage. In some cases, power source 345 may be a wired power source. Including power source 345 on neckband 330 instead of on eyewear device 305 may help better distribute the weight and heat generated by power source 345.

As noted, some extended reality systems may, instead of blending an extended reality with actual reality, substantially replace one or more of a user's sensory perceptions of the real world with a virtual experience. One example of this type of system is a head-worn display system, such as virtual reality system 350 in FIG. 3B, that mostly or completely covers a user's field of view. Virtual reality system 350 may include a front rigid body 355 and a band 360 shaped to fit around a user's head. Virtual reality system 350 may also include output audio transducers 365(A) and 365(B). Furthermore, while not shown in FIG. 3B, front rigid body 355 may include one or more electronic elements, including one or more electronic displays, one or more inertial measurement units (IMUs), one or more tracking emitters or detectors, and/or any other suitable device or system for creating an extended reality experience.

In addition to or instead of using display screens, some of the extended reality systems described herein may include one or more projection systems. For example, display devices in augmented reality system 300 and/or virtual reality system 350 may include micro-LED projectors that project light (using, e.g., a waveguide) into display devices, such as clear combiner lenses that allow ambient light to pass through. The display devices may refract the projected light toward a user's pupil and may enable a user to simultaneously view both extended reality content and the real world. The display devices may accomplish this using any of a variety of different optical components, including waveguide components (e.g., holographic, planar, diffractive, polarized, and/or reflective waveguide elements), light-manipulation surfaces and elements (such as diffractive, reflective, and refractive elements and gratings), coupling elements, etc. Extended reality systems may also be configured with any other suitable type or form of image projection system, such as retinal projectors used in virtual retina displays.

The extended reality systems described herein may also include various types of computer vision components and subsystems. For example, augmented reality system 300 and/or virtual reality system 350 may include one or more optical sensors, such as two-dimensional (2D) or 3D cameras, structured light transmitters and detectors, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor. An extended reality system may process data from one or more of these sensors to identify a location of a user, to map the real world, to provide a user with context about real-world surroundings, and/or to perform a variety of other functions.

The extended reality systems described herein may also include one or more input and/or output audio transducers. Output audio transducers may include voice coil speakers, ribbon speakers, electrostatic speakers, piezoelectric speakers, bone conduction transducers, cartilage conduction transducers, tragus-vibration transducers, and/or any other suitable type or form of audio transducer. Similarly, input audio transducers may include condenser microphones, dynamic microphones, ribbon microphones, and/or any other type or form of input transducer. In some embodiments, a single transducer may be used for both audio input and audio output.

In some embodiments, the extended reality systems described herein may also include tactile (e.g., haptic) feedback systems, which may be incorporated into headwear, gloves, body suits, handheld controllers, environmental devices (e.g., chairs, floor mats, etc.), and/or any other type of device or system. Haptic feedback systems may provide various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. Haptic feedback systems may also provide various types of kinesthetic feedback, such as motion and compliance. Haptic feedback may be implemented using motors, piezoelectric actuators, fluidic systems, and/or a variety of other types of feedback mechanisms. Haptic feedback systems may be implemented independent of other extended reality devices, within other extended reality devices, and/or in conjunction with other extended reality devices.

By providing haptic sensations, audible content, and/or visual content, extended reality systems may create an entire virtual experience or enhance a user's real-world experience in a variety of contexts and environments. For instance, extended reality systems may assist or extend a user's perception, memory, or cognition within a particular environment. Some systems may enhance a user's interactions with other people in the real world or may enable more immersive interactions with other people in a virtual world. Extended reality systems may also be used for educational purposes (e.g., for teaching or training in schools, hospitals, government organizations, military organizations, business enterprises, etc.), entertainment purposes (e.g., for playing video games, listening to music, watching video content, etc.), and/or for accessibility purposes (e.g., as hearing aids, visual aids, etc.). The embodiments disclosed herein may enable or enhance a user's extended reality experience in one or more of these contexts and environments and/or in other contexts and environments.

As noted, extended reality systems 300 and 350 may be used with a variety of other types of devices to provide a more compelling extended reality experience. These devices may be haptic interfaces with transducers that provide haptic feedback and/or that collect haptic information about a user's interaction with an environment. The extended reality systems disclosed herein may include various types of haptic interfaces that detect or convey various types of haptic information, including tactile feedback (e.g., feedback that a user detects via nerves in the skin, which may also be referred to as cutaneous feedback) and/or kinesthetic feedback (e.g., feedback that a user detects via receptors located in muscles, joints, and/or tendons).

One or more vibrotactile devices 420 may be positioned at least partially within one or more corresponding pockets formed in textile material 415 of vibrotactile system 400. Vibrotactile devices 420 may be positioned in locations to provide a vibrating sensation (e.g., haptic feedback) to a user of vibrotactile system 400. For example, vibrotactile devices 420 may be positioned against the user's finger(s), thumb, or wrist, as shown in FIG. 4A. Vibrotactile devices 420 may, in some examples, be sufficiently flexible to conform to or bend with the user's corresponding body part(s).

A power source 425 (e.g., a battery) for applying a voltage to the vibrotactile devices 420 for activation thereof may be electrically coupled to vibrotactile devices 420, such as via conductive wiring 430. In some examples, each of vibrotactile devices 420 may be independently electrically coupled to power source 425 for individual activation. In some embodiments, a processor 435 may be operatively coupled to power source 425 and configured (e.g., programmed) to control activation of vibrotactile devices 420.

Vibrotactile system 400 may optionally include other subsystems and components, such as touch-sensitive pads 450, pressure sensors, motion sensors, position sensors, lighting elements, and/or user interface elements (e.g., an on/off button, a vibration control element, etc.). During use, vibrotactile devices 420 may be configured to be activated for a variety of different reasons, such as in response to the user's interaction with user interface elements, a signal from the motion or position sensors, a signal from the touch-sensitive pads 450, a signal from the pressure sensors, a signal from the other device or system 440, etc.
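As a rough illustration of this control path, the sketch below models a processor such as 435 routing trigger signals to individually activatable vibrotactile devices. The class and method names (Vibrotactor, VibrotactileController, on_trigger) are hypothetical stand-ins, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class Vibrotactor:
    """One vibrotactile device positioned in a textile pocket."""
    location: str            # e.g., "index_finger", "thumb", "wrist"
    duty_cycle: float = 0.0  # 0.0 (off) .. 1.0 (full drive voltage)

class VibrotactileController:
    """Hypothetical control loop: routes trigger signals (UI events,
    sensor signals) to individual device activations."""

    def __init__(self, devices: dict):
        self.devices = devices

    def on_trigger(self, source: str, location: str, intensity: float) -> None:
        # Each device is independently coupled to the power source, so
        # it can be activated individually at its own intensity.
        device = self.devices[location]
        device.duty_cycle = max(0.0, min(1.0, intensity))
        print(f"[{source}] drive {location} at {device.duty_cycle:.0%}")

controller = VibrotactileController(
    {loc: Vibrotactor(loc) for loc in ("thumb", "index_finger", "wrist")})
controller.on_trigger("pressure_sensor", "index_finger", 0.6)
controller.on_trigger("ui_button", "wrist", 1.0)
```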

Although power source 425, processor 435, and communications interface 445 are illustrated in FIG. 4A as being positioned in haptic device 410, the present disclosure is not so limited. For example, one or more of power source 425, processor 435, or communications interface 445 may be positioned within haptic device 405 or within another wearable textile.

Haptic wearables, such as those shown in and described in connection with FIG. 4A, may be implemented in a variety of types of extended reality systems and environments. FIG. 4B shows an example extended reality environment 460 including one head-mounted virtual reality display and two haptic devices (e.g., gloves), and in other embodiments any number and/or combination of these components and other components may be included in an extended reality system. For example, in some embodiments there may be multiple head-mounted displays each having an associated haptic device, with each head-mounted display and each haptic device communicating with the same console, portable computing device, or other computing system.

While haptic interfaces may be used with virtual reality systems, as shown in FIG. 4B, haptic interfaces may also be used with augmented reality systems, as shown in FIG. 4C. FIG. 4C is a perspective view of a user 475 interacting with an augmented reality system 480. In this example, user 475 may wear a pair of augmented reality glasses 485 that may have one or more displays 487 and that are paired with a haptic device 490. In this example, haptic device 490 may be a wristband that includes a plurality of band elements 492 and a tensioning mechanism 495 that connects band elements 492 to one another.

One or more of band elements 492 may include any type or form of actuator suitable for providing haptic feedback. For example, one or more of band elements 492 may be configured to provide one or more of various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature. To provide such feedback, band elements 492 may include one or more of various types of actuators. In one example, each of band elements 492 may include a vibrotactor (e.g., a vibrotactile actuator) configured to vibrate in unison or independently to provide one or more of various types of haptic sensations to a user. Alternatively, only a single band element or a subset of band elements may include vibrotactors.

Haptic devices 405, 410, 470, and 490 may include any suitable number and/or type of haptic transducer, sensor, and/or feedback mechanism. For example, haptic devices 405, 410, 470, and 490 may include one or more mechanical transducers, piezoelectric transducers, and/or fluidic transducers. Haptic devices 405, 410, 470, and 490 may also include various combinations of different types and forms of transducers that work together or independently to enhance a user's extended reality experience. In one example, each of band elements 492 of haptic device 490 may include a vibrotactor (e.g., a vibrotactile actuator) configured to vibrate in unison or independently to provide one or more of various types of haptic sensations to a user.

In some embodiments, the data 525 obtained via the client system 505 is associated with one or more privacy settings. The data 525 may be stored on or otherwise associated with any suitable computing system or application, such as, for example, a social-networking system, a client system, a third-party system, a messaging application, a photo-sharing application, a biometric data acquisition application, an artificial-reality application, a virtual assistant application, and/or any other suitable computing system or application.

In some embodiments, privacy settings for the data 525 may specify a “blocked list” of users or other entities that should not be allowed to access certain information associated with the data 525. In some cases, the blocked list may include third-party entities. The blocked list may specify one or more users or entities for which the data 525 is not visible.

Privacy settings associated with the data 525 may specify any suitable granularity of permitted access or denial of access. As an example, access or denial of access may be specified for particular users (e.g., only me, my roommates, my boss), users within a particular degree-of-separation (e.g., friends, friends-of-friends), user groups (e.g., the gaming club, my family), user networks (e.g., employees of particular employers, students or alumni of a particular university), all users (“public”), no users (“private”), users of third-party systems, particular applications (e.g., third-party applications, external websites), other suitable entities, or any suitable combination thereof. In some embodiments, different pieces of the data 525 of the same type associated with a user may have different privacy settings. In addition, one or more default privacy settings may be set for each piece of data 525 of a particular data-type.
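One way such settings might be evaluated is sketched below: each piece of data carries a blocked list and an audience, and falls back to a per-data-type default. The record layout, default values, and the may_access helper are illustrative assumptions, not the platform's actual design.

```python
# Per-data-type default privacy settings (illustrative values only).
DEFAULTS_BY_TYPE = {"biometric": "private", "message": "friends"}

def may_access(data: dict, requester: str, requester_groups: set) -> bool:
    """Decide whether a requesting entity may access a piece of data 525."""
    if requester in data.get("blocked", set()):
        return False  # the blocked list always wins
    audience = data.get("audience", DEFAULTS_BY_TYPE.get(data["type"], "private"))
    if audience == "public":
        return True
    if audience == "private":
        return requester == data["owner"]
    # Otherwise the audience names a group (friends, my family, the
    # gaming club, a user network, ...).
    return audience in requester_groups

gaze_data = {"type": "biometric", "owner": "alice", "blocked": {"eve"}}
print(may_access(gaze_data, "bob", {"friends"}))  # False: private by default
print(may_access(gaze_data, "alice", set()))      # True: owner access
```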

Although the social communication platform 500 is described with regard to generating the haptic signal 535 at the client system 505(a) of the sending user, it should be understood that the haptic signal 535 can alternatively be generated at the client system 505(b) of the receiving user or a completely different remote system (e.g., a distributed social networking system) using similar components and techniques described herein. Moreover, the social communication platform 500 illustrates a one-way haptic communication where the sending user sends a haptic signal to the receiving user; however, it should be understood that the haptic communication can be bidirectional, and the client system 505(b) of the receiving user could have similar components as described with respect to the client system 505(a) of the sending user, and likewise the client system 505(a) of the sending user could have similar components as described with respect to the client system 505(b) of the receiving user. Further, a sending user can broadcast the haptic signal via network 540 to a plurality of client systems 505(b-n) associated with receiving users instead of a single receiving user.

Touch Communication Techniques

Touch Communication Using a Lexicon of Emojis

FIG. 6A is a block diagram illustrating components of a social communication system 600 for converting input data 605 to haptic output 610 using a lexicon of emojis 615 in accordance with various embodiments. To generate the haptic output 610, input data 605 from a first user (sending user) is processed by an algorithm using the lexicon of emojis 615 to obtain a corresponding haptic signal that is transmitted to a second user (receiving user) to operate the haptic feedback device. The haptic feedback device receives the transmitted haptic signals, translates the haptic signals into the haptic output 610, and transmits the haptic output 610 corresponding to the received haptic signals to a body of the second user.

In some instances, the lexicon of emojis 615 may be a key-value store, or key-value database, which is a type of data storage software program that stores data as a set of unique identifiers, each of which has an associated value. This data pairing is known as a “key-value pair.” The unique identifier is the “key” for an item of data, and a value is either the data being identified or the location of that data. Although the lexicon of emojis 615 is described herein as a key-value database, it should be understood that other database designs could be used without departing from the spirit and scope of the present disclosure. For example, in other instances, the lexicon of emojis 615 is a relational database, where data is stored in tables composed of rows and columns. The database developer specifies attributes of the data (i.e., emojis and assets thereof) to be stored in the table up front. This creates significant opportunities for optimizations such as data compression and performance around aggregations and data access. The attributes of the data may be queried in a similar fashion as keys in the key-value database to identify emojis associated with such attributes.
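A minimal sketch of such a key-value lexicon is shown below: each key is a unique identifier derived from the input data, and each value is a record of the emoji's digital assets. The keys, file paths, and parameter values are illustrative assumptions, not entries from the patent.

```python
# Illustrative lexicon of emojis: key -> digital assets (or their
# storage locations). The haptic entry carries the parameter
# information used to generate patterns for haptic output.
lexicon = {
    "wave_hello": {
        "image": "assets/wave.gif",  # visual component
        "audio": "assets/wave.wav",  # audio component
        "haptic": {"interval_ms": 120, "pitch_hz": 180, "amplitude": 0.7},
    },
    "laugh": {
        "image": "assets/haha.json",
        "audio": "assets/haha.mp3",
        "haptic": {"interval_ms": 80, "pitch_hz": 250, "amplitude": 0.9},
    },
}

def lookup(key: str):
    """Return the digital assets for a key, or None if absent."""
    return lexicon.get(key)

print(lookup("wave_hello")["haptic"])
```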

The lexicon of emojis 615 may comprise any number of emojis 620(A-N). Each of the emojis 620 is configured with a corresponding electronic communication that includes a visual component (shown in FIG. 6B as the character in each illustration), an audio component (shown in FIG. 6B as the verbal utterance in each illustration), a haptic component (shown in FIG. 6C as the haptic signal pattern in each illustration), or a combination thereof. Emojis with a visual component (e.g., a pictogram, logogram, or ideogram) are associated within the lexicon to an image or video asset (e.g., a JPEG, GIF, MOV, or JSON file). Emojis with an audio component are associated within the lexicon to an audio asset (e.g., a WAV or MP3 file). Emojis with a haptic component are associated within the lexicon to a haptic signal (e.g., parameter information on interval, pitch, amplitude, or a combination thereof for a touch message to be perceived by a receiving user's body), which can be converted into haptic output 610.

The haptic signal for each emoji may be pre-generated. In some instances, the haptic signal is configured with parameter information for interval, pitch, and amplitude to generate patterns for the haptic output 610 that match the image or animation of the emoji and/or the sound effect of the emoji (i.e., the image or audio component supplements the understanding of the haptic component). In other instances, the haptic signal is configured with parameter information determined by a user (e.g., a perceptual scientist) to generate patterns for the haptic output 610 that best communicate the emotion to a user (i.e., the haptic component has a high likelihood of conveying the emotion to a user without the image or audio component). In other instances, the haptic signal is configured with parameter information determined by a user (e.g., a user of the HMD device) to generate patterns for the haptic output 610 that customize touch communication to a user (i.e., the haptic component is customized for conveying the emotion to a user with or without the image or audio component).
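The following sketch shows one plausible way parameter information for interval, pitch, and amplitude could be rendered into a pulsed vibration pattern. The sine-based waveform and sampling scheme are assumptions for illustration, not the patent's signal format.

```python
import math

def haptic_pattern(interval_ms: float, pitch_hz: float, amplitude: float,
                   pulses: int = 3, sample_rate: int = 1000) -> list:
    """Render a pulsed vibration pattern from a haptic signal's
    parameter information: each pulse lasts one interval at the given
    pitch and amplitude, separated by silent gaps of equal length."""
    samples = []
    pulse_len = int(sample_rate * interval_ms / 1000)
    for _ in range(pulses):
        for n in range(pulse_len):
            t = n / sample_rate
            samples.append(amplitude * math.sin(2 * math.pi * pitch_hz * t))
        samples.extend([0.0] * pulse_len)  # gap between pulses
    return samples

# e.g., a gentle "wave" pattern vs. a sharper, faster "laugh" pattern
wave = haptic_pattern(interval_ms=120, pitch_hz=180, amplitude=0.7)
laugh = haptic_pattern(interval_ms=80, pitch_hz=250, amplitude=0.9)
print(len(wave), len(laugh))
```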

A lexicon signal converter 625 converts the input data 605 into haptic signals using the lexicon of emojis 615. The lexicon signal converter 625 may be a component in a signal generator (e.g., signal generator 555 described with respect to FIG. 5). The lexicon signal converter 625 comprises an input data processing module 630, a pattern recognition module 635, and a query engine 640. The lexicon signal converter 625 determines the characteristics of the input data 605 received (e.g., text, audio, images or video, sensor data, or the like) using the input data processing module 630, identifies a key or attributes within the input data 605 using the pattern recognition module 635, and communicates the key or attributes to the query engine 640 for searching the lexicon of emojis 615 to identify one or more emojis associated with an electronic communication.
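A hypothetical composition of these three modules is sketched below, reusing the illustrative lexicon mapping from the earlier sketch. The method names and the gesture-to-key rule are stand-ins for whatever characteristic detection and pattern recognition the system actually performs.

```python
class LexiconSignalConverter:
    """Sketch of the lexicon signal converter 625: characterize the
    input data, recognize a key or attributes, query the lexicon."""

    def __init__(self, lexicon: dict):
        self.lexicon = lexicon

    def characterize(self, input_data: dict) -> str:
        # Input data processing module 630: determine the modality
        # (text, audio, images or video, sensor data, ...).
        return input_data["modality"]

    def recognize(self, input_data: dict, modality: str) -> list:
        # Pattern recognition module 635: e.g., map a detected
        # hand-wave gesture to the key "wave_hello".
        if modality == "sensor" and input_data.get("gesture") == "hand_wave":
            return ["wave_hello"]
        return input_data.get("keywords", [])

    def query(self, keys: list) -> list:
        # Query engine 640: search the lexicon for matching emojis.
        return [self.lexicon[k] for k in keys if k in self.lexicon]

converter = LexiconSignalConverter(lexicon)
keys = converter.recognize({"modality": "sensor", "gesture": "hand_wave"}, "sensor")
print(converter.query(keys)[0]["haptic"])
```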

FIG. 7 is a flowchart illustrating a process 700 for converting input data to haptic output using a lexicon of emojis according to various embodiments. The processing depicted in FIG. 7 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 7 and described below is intended to be illustrative and non-limiting. Although FIG. 7 depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in some different order or some steps may also be performed in parallel. In certain embodiments, such as in an embodiment depicted in FIGS. 1, 2A, 2B, 3A, 3B, 4A, 4B, 4C, 5, or 6A-6C, the processing depicted in FIG. 7 may be performed by a social communication platform or system that facilitates touch communication with users.

At step 705, input data is obtained from a client system of a first user (e.g., captured using one or more sensors). In some instances, the one or more sensors capture input data including images of a visual field of the first user wearing a head-mounted device comprising a display to display content to the first user. The input data includes: (i) data regarding activity of the user in an extended reality environment (e.g., images and audio of the user interacting in the physical environment and/or the virtual environment), (ii) data from external systems, or (iii) both. In some instances, the data regarding activity of the user includes text, audio, images or video, sensor data, or the like.

At step 710, features are extracted from the input data that correspond to an electronic communication. The extracting comprises determining characteristics of the input data and identifying patterns within the input data that correspond to a key or attributes of electronic communication based on the characteristics. The key or attributes are the extracted features.

At step 715, an emoji (e.g., a haptic emoji) is identified from a lexicon of emojis based on the extracted features. The identifying the emoji comprises constructing a query using the extracted features as parameters of the query and executing the query on the lexicon of emojis.

At step 720, digital assets are obtained for the emoji. The digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output. In some instances, the digital assets further comprise an image or video asset, an audio asset, or both. The haptic signal for the emoji may be pre-generated. In some instances, the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output. In some instances, the haptic signal is configured with parameter information for interval, pitch, and amplitude to generate patterns for the haptic output that match the image or animation of the emoji and/or the sound effect of the emoji. In other instances, the haptic signal is configured with parameter information determined by a user (e.g., the first user or another user) to generate patterns for the haptic output that communicate an emotion via touch communication to the second user.

At step 725, the digital assets are transmitted to a device of a second user. In some instances, the device is another head-mounted device. The device is configured to convert the haptic signal to the haptic output based on the parameter information in order to convey a touch message as at least part of the electronic communication to the second user via a haptic device. In some instances, the haptic output is generated with virtual content (e.g., the image or animation of the emoji and/or the sound effect of the emoji) that is generated and rendered by the client system in the extended reality environment displayed to the user based on the digital assets (e.g., the image or video asset, the audio asset, or both).
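Tying the steps together, a compact sketch of process 700 might look like the following. The capture and transmission layers are stubbed out, and the converter is the illustrative one sketched earlier.

```python
def process_700(converter, transmit) -> None:
    # Step 705: obtain input data (here a stubbed sensor capture).
    input_data = {"modality": "sensor", "gesture": "hand_wave"}
    # Step 710: extract features (characteristics, then key/attributes).
    modality = converter.characterize(input_data)
    keys = converter.recognize(input_data, modality)
    # Steps 715-720: identify the emoji and obtain its digital assets.
    assets = converter.query(keys)
    # Step 725: transmit the digital assets to the second user's device.
    if assets:
        transmit(assets[0])

process_700(converter, transmit=lambda assets: print("send:", assets["haptic"]))
```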

Touch Communication Using AI-Based System

A prediction model 825 can be a machine-learning model, such as a convolutional neural network (“CNN”), e.g., an inception neural network, a residual neural network (“Resnet”), or a recurrent neural network, e.g., long short-term memory (“LSTM”) models or gated recurrent units (“GRUs”) models, other variants of Deep Neural Networks (“DNN”) (e.g., a multi-label n-binary DNN classifier or multi-class DNN classifier). A prediction model 825 can also be any other suitable ML model trained for providing a recommendation, such as a generative adversarial network (GAN), Naive Bayes Classifier, Linear Classifier, Support Vector Machine, Bagging Models such as Random Forest Model, Boosting Models, Shallow Neural Networks, or combinations of one or more of such techniques, e.g., CNN-HMM or MCNN (Multi-Scale Convolutional Neural Network). The machine-learning prediction system 800 may employ the same type of prediction model or different types of prediction models for predicting haptic emojis for conveying a touch message. Still other types of prediction models may be implemented in other examples according to this disclosure.
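As a concrete, minimal example of one of the listed model families, the sketch below trains a random forest on toy features (here, a gesture identifier and a message-sentiment score). The features, labels, and data are invented for illustration only.

```python
from sklearn.ensemble import RandomForestClassifier

# Toy feature rows: [gesture_id, recent_message_sentiment]
X_train = [[0, 0.9], [0, 0.8], [1, -0.7], [1, -0.9], [2, 0.1]]
y_train = ["wave_hello", "wave_hello", "nope", "nope", "thinking"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)
print(model.predict([[0, 0.85]]))  # -> ['wave_hello']
```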

To train the various prediction models 825, the training stage 810 is comprised of two main components: dataset preparation module 830 and model training framework 840. The dataset preparation module 830 performs the processes of loading data assets 845, splitting the data assets 845 into training and validation sets 845a-n so that the system can train and test the prediction models 825, and pre-processing of data assets 845. The splitting of the data assets 845 into training and validation sets 845a-n may be performed randomly (e.g., a 90/10% or 70/30% split) or the splitting may be performed in accordance with a more complex validation technique such as K-Fold Cross-Validation, Leave-one-out Cross-Validation, Leave-one-group-out Cross-Validation, Nested Cross-Validation, or the like to minimize sampling bias and overfitting.
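The dataset preparation steps can be sketched with standard tooling, reusing the toy data above: a random 70/30 split, and separately a 5-fold cross-validation pass as one of the listed techniques for reducing sampling bias and overfitting.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score, train_test_split

# Random 70/30 split of the data assets into training and validation sets.
X_tr, X_val, y_tr, y_val = train_test_split(
    X_train, y_train, test_size=0.3, random_state=0)

# K-Fold Cross-Validation over the same toy data.
scores = cross_val_score(
    RandomForestClassifier(n_estimators=50, random_state=0),
    X_train, y_train,
    cv=KFold(n_splits=5, shuffle=True, random_state=0))
print("per-fold accuracy:", scores)
```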

The model training stage 810 outputs trained models including one or more trained prediction models 860. The one or more trained prediction models 860 may be deployed and used in the implementation stage 820 to predict a haptic emoji or haptic signal 865 for conveying a touch message. For example, prediction models 860 may receive input data 870 (e.g., a gesture by a first user) or context data (e.g., a text message received by a second user), and predict a haptic emoji or haptic signal based on features and relationships between features extracted from within the input data 870.
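At the implementation stage, the deployed model's prediction can then be resolved to digital assets, for example via the lexicon sketched earlier. The predict_and_fetch helper is a hypothetical name, not a component of system 800.

```python
def predict_and_fetch(model, features: list, lexicon: dict):
    """Predict a haptic emoji key from input data 870 features and
    resolve it to the emoji's digital assets (or None)."""
    key = model.predict([features])[0]
    return lexicon.get(key)

assets = predict_and_fetch(model, [0, 0.85], lexicon)
print(assets["haptic"] if assets else "no assets for predicted emoji")
```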

FIG. 9 is a flowchart illustrating a process 900 to predict haptic emojis for conveying a touch message according to various embodiments. The processing depicted in FIG. 9 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 9 and described below is intended to be illustrative and non-limiting. Although FIG. 9 depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in some different order or some steps may also be performed in parallel. In certain embodiments, such as in an embodiment depicted in FIGS. 1, 2A, 2B, 3A, 3B, 4A, 4B, 4C, 5, or 8, the processing depicted in FIG. 9 may be performed by a social communication platform or system that facilitates touch communication with users.

At step 905, input data is obtained from a client system of a first user (e.g., captured using one or more sensors). In some instances, the one or more sensors capture input data including images of a visual field of the first user wearing a head-mounted device comprising a display to display content to the first user. The input data includes: (i) data regarding activity of the user in an extended reality environment (e.g., images and audio of the user interacting in the physical environment and/or the virtual environment), (ii) data from external systems, or (iii) both. In some instances, the data regarding activity of the user includes text, audio, images or video, sensor data, or the like.

At step 910, a haptic emoji or a haptic signal is predicted based on the input data and model parameters learned from historical input data (e.g., a gesture by a first user) and context data (e.g., a text message received by a second user).

At optional step 915 (in instances of predicting a haptic emoji), digital assets are obtained for the emoji. The digital assets comprise a haptic signal configured with parameter information to generate patterns for haptic output. In some instances, the digital assets further comprise an image or video asset, an audio asset, or both. The haptic signal for the emoji may be pre-generated. In some instances, the haptic signal is configured with the parameter information for interval, pitch, and amplitude to generate the patterns for the haptic output. In some instances, the haptic signal is configured with parameter information for interval, pitch, and amplitude to generate patterns for the haptic output that match the image or animation of the emoji and/or the sound effect of the emoji. In other instances, the haptic signal is configured with parameter information determined by a user (e.g., the first user or another user) to generate patterns for the haptic output that communicate an emotion via touch communication to the second user.

At step 920, the digital assets or haptic signal are transmitted to a device of a second user. In some instances, the device is another head-mounted device. The device is configured to convert the haptic signal to the haptic output based on the parameter information in order to convey a touch message as at least part of the electronic communication to the second user via a haptic device. In some instances, the haptic output is generated with virtual content (e.g., the image or animation of the emoji and/or the sound effect of the emoji) that is generated and rendered by the client system in the extended reality environment displayed to the user based on the digital assets (e.g., the image or video asset, the audio asset, or both).
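
Steps 905-920 can be summarized as a pipeline; the sketch below wires hypothetical stand-ins for each component so the control flow is explicit. None of these callables is the patent's actual interface.

```python
from typing import Callable

def process_900(
    capture_input: Callable[[], dict],   # step 905: sensors on the client system
    predict: Callable[[dict], dict],     # step 910: trained prediction model
    get_assets: Callable[[str], dict],   # step 915: lexicon/asset lookup
    transmit: Callable[[dict], None],    # step 920: to the second user's device
) -> None:
    input_data = capture_input()
    prediction = predict(input_data)
    if "emoji" in prediction:            # optional step 915: haptic emoji case
        payload = get_assets(prediction["emoji"])   # haptic signal + image/audio
    else:
        payload = {"haptic_signal": prediction["haptic_signal"]}
    transmit(payload)

# Usage with trivial stubs:
process_900(
    capture_input=lambda: {"gesture": "wave"},
    predict=lambda d: {"emoji": "wave"},
    get_assets=lambda e: {"haptic_signal": [0.8, 0.0, 0.8], "image": f"{e}.png"},
    transmit=print,
)
```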

Learning Program to Facilitate Learning of the Haptic Output

The input data 1005 may be text, audio, images or video, sensor data, or the like. The additional information 1030 may include a text description of the touch communication conveyed by the haptic signal (e.g., for a wave haptic signal, the text could say "sending user" waves hello to "receiving user"), an audio component corresponding to a haptic signal (e.g., a laughing sound corresponding to a HaHaHa haptic signal), an image component corresponding to a haptic signal (e.g., a character giving a thumbs down for a nope haptic signal), or a combination thereof.

In other instances, where the artificial intelligence based system 1020 predicts a haptic emoji or haptic signal, the learning module 1025 takes as input the haptic signal (or corresponding haptic emoji information) and determines, using one or more rules, logic, or machine-learning models, additional information 1030 (e.g., an audio component or an image component) that could be used to supplement the haptic signal. For example, the learning module 1025 may use one or more rules, logic, or machine-learning models to determine a text component, an audio component, and/or an image component that could be used to supplement the haptic signal (or corresponding haptic emoji information), then retrieve the text component, the audio component, and/or the image component from the data storage device 1035 or a secondary data storage device 1040 (e.g., a remote storage device or third-party storage device) and forward it along with the haptic component.
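
A minimal sketch of this supplement step, assuming a simple dictionary layout for the data storage device 1035 and the secondary data storage device 1040 (the layout and the fallback order are assumptions):

```python
PRIMARY_STORE = {   # stands in for data storage device 1035
    "wave": {"text": '"sending user" waves hello to "receiving user"'},
}
SECONDARY_STORE = {  # stands in for secondary data storage device 1040
    "wave": {"audio": "wave.ogg", "image": "wave.png"},
}

def supplement(haptic_emoji: str, haptic_signal: list) -> dict:
    """Return the haptic component plus its additional information 1030."""
    extra = dict(PRIMARY_STORE.get(haptic_emoji, {}))
    for key, value in SECONDARY_STORE.get(haptic_emoji, {}).items():
        extra.setdefault(key, value)   # fall back to the secondary/remote store
    return {"haptic_signal": haptic_signal, "additional_information": extra}

message = supplement("wave", [0.8, 0.0, 0.8])
```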

The benefits and advantages of this approach are that the receiving user may more easily learn the haptic output patterns and their associated meanings based on associated visual and/or audio context. For example, the learning module 1025 may be configured to transmit the haptic signal along with a visual and/or audio signal to the receiving user such that, when the user feels the haptic output 1010 based on the haptic signal, the user concurrently visualizes the visual signal (e.g., a visual emoji) on a display and/or hears the audio signal; in this way, the user learns to associate the haptic output pattern with an associated visual and/or audio context. The visual and/or audio signal may be obtained as part of the additional information 1030 and associated and transmitted with the haptic signal by the learning module 1025. Additionally or alternatively, the visual and/or audio signal may be generated based on the additional information 1030 by the learning module 1025, and associated and transmitted with the haptic signal by the learning module 1025.

FIG. 11 is a flowchart illustrating a process 1100 for supplementing a haptic signal with additional information to facilitate a user learning a haptic output in accordance with various embodiments. The processing depicted in FIG. 11 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 11 and described below is intended to be illustrative and non-limiting. Although FIG. 11 depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in some different order, or some steps may also be performed in parallel. In certain embodiments, such as in an embodiment depicted in FIG. 1, 2A, 2B, 3A, 3B, 4A, 4B, 4C, 5, 6A-6C, 8, or 10, the processing depicted in FIG. 11 may be performed by a social communication platform or system that facilitates touch communication with users.

At step 1105, input data is obtained from a client system of a first user (e.g., captured using one or more sensors). In some instances, the one or more sensors capture input data including images of a visual field of the first user wearing a head-mounted device comprising a display to display content to the first user. The input data includes: (i) data regarding activity of the user in an extended reality environment (e.g., images and audio of the user interacting in the physical environment and/or the virtual environment), (ii) data from external systems, or (iii) both. In some instances, the data regarding activity of the user includes text, audio, images or video, sensor data, or the like.

At step 1110, an emoji (e.g., a haptic emoji) or haptic signal is identified from a lexicon of emojis or by an artificial intelligence based system, as described with respect to FIGS. 6A-6C, 7, 8, and 9.

At step 1115, additional information is obtained based on the emoji or haptic signal. The additional information may include a text description of the touch communication conveyed by the haptic signal (e.g., for a wave haptic signal, the text could say "sending user" waves hello to "receiving user"), an audio component corresponding to a haptic signal (e.g., a laughing sound corresponding to a HaHaHa haptic signal), an image component corresponding to a haptic signal (e.g., a character giving a thumbs down for a nope haptic signal), or a combination thereof.

At step 1120, the haptic signal and additional information are transmitted to a device of a second user. In some instances, the device is another head-mounted device. The device is configured to convert the haptic signal to the haptic output based on the parameter information in order to convey a touch message as at least part of the electronic communication to the second user via a haptic device. The haptic output is generated with virtual content (e.g., the image or animation of the emoji and/or the sound effect of the emoji), which is generated and rendered by the client system in the extended reality environment displayed to the user based on the additional information (e.g., the text, the image or video, the audio, or any combination thereof).
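
Steps 1105-1120 can likewise be sketched as a pipeline; every helper below, including the supplement step, is a hypothetical callable rather than the patent's actual interface.

```python
def process_1100(capture_input, identify_emoji, get_haptic_signal, supplement, transmit):
    input_data = capture_input()          # step 1105: sensors on the client system
    emoji = identify_emoji(input_data)    # step 1110: lexicon or AI based system
    signal = get_haptic_signal(emoji)
    payload = supplement(emoji, signal)   # step 1115: attach additional information
    transmit(payload)                     # step 1120: to the second user's device

# Usage with trivial stubs:
process_1100(
    capture_input=lambda: {"gesture": "wave"},
    identify_emoji=lambda d: "wave",
    get_haptic_signal=lambda e: [0.8, 0.0, 0.8],
    supplement=lambda e, s: {"haptic_signal": s, "text": f"{e} message"},
    transmit=print,
)
```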

Receiving the Haptic Signal and Generating the Haptic Output

The processor 1215 reads instructions from the memory 1230 and executes them to perform various operations. The processor 1215 may be embodied using any suitable instruction set architecture and may be configured to execute instructions defined in that instruction set architecture. The processor 1215 may be a general-purpose or embedded processor using any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, RISC, ARM or MIPS ISAs, or any other suitable ISA. Although a single processor is illustrated in FIG. 12, the signal generator 1200 may include multiple processors.

The haptic interface circuit 1220 is a circuit that interfaces with the cutaneous actuators 1205. The haptic interface circuit 1220 generates actuator signals 1210 based on commands from the processor 1215. For this purpose, the haptic interface circuit 1220 may include, for example, a digital-to-analog converter (DAC) for converting digital signals into analog signals. The haptic interface circuit 1220 may also include an amplifier to amplify the analog signals for transmitting the actuator signals 1210 over cables between the signal generator 1200 and the cutaneous actuators 1205. In some embodiments, the haptic interface circuit 1220 communicates with the actuators 1205 wirelessly. In such embodiments, the haptic interface circuit 1220 includes components for modulating wireless signals for transmitting to the actuators 1205 over wireless channels.
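
In software terms, the DAC and amplifier stages might be approximated as follows; the 12-bit depth, the 8 kHz rate, and the gain value are assumptions for illustration only.

```python
import numpy as np

def to_dac_codes(waveform: np.ndarray, bits: int = 12) -> np.ndarray:
    """Digital-to-analog front end: map samples in [-1, 1] to unsigned DAC codes."""
    full_scale = (1 << bits) - 1
    clipped = np.clip(waveform, -1.0, 1.0)
    return np.round((clipped + 1.0) / 2.0 * full_scale).astype(np.uint16)

def amplify(analog: np.ndarray, gain: float = 2.5) -> np.ndarray:
    """Stand-in for the amplifier stage driving the cable to the actuators."""
    return gain * analog

t = np.linspace(0.0, 0.25, 2000)              # 0.25 s at 8 kHz
wave = 0.8 * np.sin(2 * np.pi * 200.0 * t)    # 200 Hz haptic carrier
dac_codes = to_dac_codes(wave)                # digital stage (conceptually)
actuator_signal = amplify(wave)               # analog stage (conceptually)
```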

The communication module 1225 (e.g., the receiving device 570 described with respect to FIG. 5) is hardware or combinations of hardware, firmware, and software for communicating with other computing devices. The communication module 1225 may, for example, enable the signal generator 1200 to communicate with a social networking system, a transmitting or sending client system, or an electronic communication source over the network. The communication module 1225 may be embodied as a network card. The memory 1230 is a non-transitory computer readable storage medium for storing software modules. Software modules stored in the memory 1230 may include, among others, applications 1240 and a haptic signal processor 1245 (e.g., the signal processor 547 described with respect to FIG. 5). The memory 1230 may include other software modules not illustrated in FIG. 12, such as an operating system. The applications 1240 may use haptic output via the cutaneous actuators 1205 to perform various functions, such as electronic communication, gaming, and entertainment.

The signal generator 1200 as illustrated in FIG. 12 is merely illustrative and various modifications may be made to the signal generator 1200. For example, instead of embodying the signal generator 1200 as a software module, the signal generator 1200 may be embodied as a hardware circuit, or a combination of hardware circuits and software modules.

FIG. 13 is a flowchart illustrating a process 1300 for generating a haptic output in accordance with various embodiments. The processing depicted in FIG. 13 may be implemented in software (e.g., code, instructions, program) executed by one or more processing units (e.g., processors, cores) of the respective systems, hardware, or combinations thereof. The software may be stored on a non-transitory storage medium (e.g., on a memory device). The method presented in FIG. 13 and described below is intended to be illustrative and non-limiting. Although FIG. 13 depicts the various processing steps occurring in a particular sequence or order, this is not intended to be limiting. In certain alternative embodiments, the steps may be performed in some different order, or some steps may also be performed in parallel. In certain embodiments, such as in an embodiment depicted in FIG. 1, 2A, 2B, 3A, 3B, 4A, 4B, 4C, 5, 6A-6C, 8, 10, or 12, the processing depicted in FIG. 13 may be performed by a social communication platform or system that facilitates touch communication with users.

At step 1315, the one or more actuator signals are generated based on the parameters determined for the one or more actuator signals. The generating of the one or more actuator signals may include performing digital-to-analog conversion of the haptic signal and/or one or more actuator signals.

At step 1320, the one or more actuator signals are transmitted to one or more corresponding cutaneous actuators.

At step 1325, one or more cutaneous actuators generate haptic output in accordance with the corresponding one or more actuator signals, which cause one or more of various types of cutaneous feedback, including vibration, force, traction, texture, and/or temperature on the second user's body. In some instances, the haptic output is generated with virtual content (e.g., the image or animation of the emoji and/or the sound effect of the emoji), which is generated and rendered by the client system in the extended reality environment displayed to the user based on the additional information (e.g., the text, the image or video, the audio, or any combination thereof).
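
A hedged sketch of steps 1315-1325, assuming the haptic parameters arrive as a simple dictionary and each cutaneous actuator is modeled as a callable; both assumptions are illustrative only.

```python
import numpy as np

def generate_actuator_signals(params: dict, n_actuators: int = 4) -> list:
    """Step 1315: derive one sampled signal per actuator from the haptic
    parameters (identical copies here; real systems may vary per actuator)."""
    t = np.arange(0.0, params["duration_s"], 1.0 / 8000.0)
    wave = params["amplitude"] * np.sin(2 * np.pi * params["pitch_hz"] * t)
    return [wave for _ in range(n_actuators)]

def drive(actuators: list, signals: list) -> None:
    """Steps 1320-1325: transmit each signal to its corresponding cutaneous
    actuator, which turns it into vibration, force, or other feedback."""
    for actuator, signal in zip(actuators, signals):
        actuator(signal)

signals = generate_actuator_signals(
    {"duration_s": 0.3, "pitch_hz": 180.0, "amplitude": 0.7}
)
drive([lambda s: None] * 4, signals)  # stub actuators for illustration
```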

ADDITIONAL CONSIDERATIONS

Although specific examples have been described, various modifications, alterations, alternative constructions, and equivalents are possible. Examples are not restricted to operation within certain specific data processing environments, but are free to operate within a plurality of data processing environments. Additionally, although certain examples have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that this is not intended to be limiting. Although some flowcharts describe operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figure. Various features and aspects of the above-described examples may be used individually or jointly.

Further, while certain examples have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also possible. Certain examples may be implemented only in hardware, or only in software, or using combinations thereof. The various processes described herein may be implemented on the same processor or different processors in any combination.

Where devices, systems, components, or modules are described as being configured to perform certain operations or functions, such configuration may be accomplished, for example, by designing electronic circuits to perform the operation, by programming programmable electronic circuits (such as microprocessors) to perform the operation, such as by executing computer instructions or code, or by processors or cores programmed to execute code or instructions stored on a non-transitory memory medium, or any combination thereof. Processes may communicate using a variety of techniques, including but not limited to conventional techniques for inter-process communications, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times.

Specific details are given in this disclosure to provide a thorough understanding of the examples. However, examples may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the examples. This description provides examples only, and is not intended to limit the scope, applicability, or configuration of other examples. Rather, the preceding description of the examples will provide those skilled in the art with an enabling description for implementing various examples. Various changes may be made in the function and arrangement of elements.

The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope as set forth in the claims. Thus, although specific examples have been described, these are not intended to be limiting. Various modifications and equivalents are within the scope of the following claims.

In the foregoing description, for the purposes of illustration, methods were described in a particular order. It should be appreciated that in alternate examples, the methods may be performed in a different order than that described. It should also be appreciated that the methods described above may be performed by hardware components or may be embodied in sequences of machine-executable instructions, which may be used to cause a machine, such as a general-purpose or special-purpose processor or logic circuits programmed with the instructions, to perform the methods. These machine-executable instructions may be stored on one or more machine-readable mediums, such as CD-ROMs or other types of optical disks, floppy diskettes, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, flash memory, or other types of machine-readable mediums suitable for storing electronic instructions. Alternatively, the methods may be performed by a combination of hardware and software.

Where components are described as being configured to perform certain operations, such configuration may be accomplished, for example, by designing electronic circuits or other hardware to perform the operation, by programming programmable electronic circuits (e.g., microprocessors, or other suitable electronic circuits) to perform the operation, or any combination thereof.

While illustrative examples of the application have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art.

25.实现家园系统(2)教程addLocalListener("Show_Unlock_Button" + this.groupId, this.ensureNeeds.bind(this)); // 客户端监听是否显示解锁建筑按钮事件,事件名是 "Show_Unlock_Button" + 组号 this._listener2 = Event.addServerListener("Show_Unlock_Button" + this.groupId, this.ensureNeeds.bind(this)); } 12345678http://learning.ark.online/tycoon-course/4.2home-system2.html